# Josef F. Bille *Editor*

# High Resolution Imaging in Microscopy and Ophthalmology

New Frontiers in Biomedical Optics

*Forewords by* Stefan W. Hell and Robert N. Weinreb

*Editor*: Prof. Dr. Josef F. Bille, University of Heidelberg, Heidelberg, Germany

This book is an open access publication. ISBN 978-3-030-16637-3; ISBN 978-3-030-16638-0 (eBook). https://doi.org/10.1007/978-3-030-16638-0

© The Editor(s) (if applicable) and The Author(s) 2019

**Open Access** This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cover Illustration: Courtesy Marco Lupidi, University of Perugia, Italy and Heidelberg Engineering GmbH

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

## **Foreword 1**

#### **In Memoriam Dr. Gerhard Zinser**

When I started my graduate work in the laboratories of Heidelberg Instruments GmbH in the Neuenheimer Feld, Gerhard Zinser was already an established senior scientist of the company. There was not a direct overlap in what we both worked on, and we did not work together. However, for a long stretch of time the optics laboratory in which I conducted my experiments was located right next to Gerhard's office. We therefore ran into each other every day.

Gerhard was a luminous example of dedication to his work. He was so passionate about what he was doing; it was a joy to see. I would say he was at the workplace even more than I was, and yes, I was there a lot.

So, one Sunday—it had seemed there was no one else there in the building when I had entered in the morning—I jumped out of the lab in the early afternoon, and there was Gerhard. Of course he was, never mind the Sunday. He was busy preparing a scientific poster for an ophthalmology meeting, describing a new laser scanner. And I couldn't but watch in awe: Gerhard was on the floor of the hallway getting the job done with spectacular efficiency. Note that in those days posters were still cut and pasted manually with glue. He was putting together the poster at lightning speed, a cigarette in the left corner of his mouth, and smiling at me. He then walked me through his poster, and I so clearly remember his joy and enthusiasm for the science. He enjoyed what he was doing, and this joy was exemplary.

Over the following 25 and more years, I got to know a lot of people in optics. I went to meetings in microscopy, sure, but met so many people from the adjacent fields, including ophthalmology. And whenever we talked and I would mention Gerhard, they knew right away who I was talking about. And I was so proud to know him.

Gerhard Zinser was a major player in applied optics and ophthalmology in particular. He made contributions of a lasting impact. I was not at all surprised when he so very successfully embarked on new scientific adventures and responsibilities with his key roles in Heidelberg Engineering. His vision and dedication to excellence will be missed by all who knew him.

> Stefan W. Hell, Max Planck Institute for Biophysical Chemistry, Göttingen, Germany

## **Foreword 2**

#### **Memories**

It was December 1991. The annual meeting of the American Glaucoma Society was taking place at the Hotel del Coronado, just south of San Diego. It was the fourth meeting of the Society that had been founded just several years earlier. And it was just 8 months after opening the Shiley Eye Center in La Jolla at the University of California San Diego, approximately 25 miles north of the meeting site. There was a surprising brief rain shower early that evening in San Diego, and a bus filled with 38 glaucoma colleagues was in transit to La Jolla. They had heard rumors for more than 1 year that there was a new medical imaging device that would enable quantitative and objective imaging of the optic nerve head. Moreover, it would soon be available at a reasonable cost and it would provide for practical patient testing in the office. I am told that those on the bus were excited because many of them thought that they might be viewing the future of glaucoma management. I had lectured and published on optic disc imaging, and hoped that our demonstration would justify the funding of a National Eye Institute grant that I had received several years earlier to study this technology, and validate much of what we had been doing. At the Shiley Eye Center, there was anxiety among almost all of those who had done research or developed the technology over the preceding few years. However, there was one individual who sat on the side, just near the front window, waiting for the bus. He was smoking one cigarette after the other and had his usual smile.

Actually, the idea for imaging the optic disc and retinal nerve fiber layer was not new. Several other technologies had been tested and employed, but never gained traction. Just a few years before the eventful December 1991 demonstration, a commercial confocal scanning laser ophthalmoscope (the Laser Tomographic Scanner (LTS) by Heidelberg Instruments) had been developed and commercialized by the brilliant Josef Bille and his team of engineers and students. At that time, Josef spent increasing amounts of time with us at UCSD and on many of his visits he was accompanied by his students. Uniformly, they were all hardworking, clever, and serious about their work. It was around that time that I first met Gerhard Zinser. Gerhard stood out among the many graduate and postdoctoral students that came to work with Josef and us in San Diego. Not only was he the brightest star, but he was collaborative, insightful, visionary, and just a wonderfully warm person. Ask him a technical question and there always was a thoughtful and comprehensible response. In discussions in the laboratory and also in restaurants (where he would opine over his steak and potatoes and postprandial cigarettes), we spent hours discussing how this technology could be applied to both the optic disc and macula. So many of those hours, we spent just discussing reference planes and analyses.

Gerhard understood well the potential for confocal imaging of the eye. He also understood well the limitations of the ponderous and costly LTS that we were using in our research. So I was not surprised when he told me that he would be developing a next generation instrument. And, I also was not surprised when he said that it was ready for testing.

And that brings us back to December 1991. The new instrument, called the Heidelberg Retina Tomograph or HRT, was relatively compact and inexpensive. With improvements in hardware and software, it was capable of faster and better imaging. It was supposed to arrive well in advance of its demonstration to my glaucoma colleagues. However, it had been delayed at customs in Los Angeles. The instrument finally did arrive but, to our dismay, it did not work. It was a fiber, tube, or electrical component that needed replacement. The only replacement would need to be shipped from Germany. We received notification that it had been sent, but unfortunately it had not yet made it to La Jolla. A series of phone calls (there was no internet) confirmed its shipment. And, again, we discovered that it was in customs at the Los Angeles Airport. I do not remember who, but someone from Heidelberg Engineering raced to their car and drove 100 miles north to retrieve it. They then raced back again. It was well after midnight; our group, fueled by coffee and colas, was determined to have a functional device for the visitors. I do not remember exactly when the component arrived. But I distinctly remember what happened next. Gerhard jumped into action and began some serious tinkering.

It was afternoon when he said the HRT was ready for testing. What if it still did not image? Or, what if the imaging was not as expected? The only one in the room who had complete confidence that it would work as planned was Gerhard. And sure enough, he plugged in the instrument to an outlet and flipped the switch to turn it on. The room was silent as we waited. Gerhard pressed some buttons and adjusted some things at the keyboard. After being up all night, and waiting throughout the day for the delivery of the component, we learned that the bus had left the hotel and was on its way. And then I never will forget as he walked to the front window, took a seat, lit a cigarette, and then with a broad smile he calmly told us that it was working well.

Our colleagues arrived and to them all seemed just fine. Little did they know what had happened over the preceding 18 h and that we were without sleep. Imaging them one after the other, we could see their excitement. It was then that we knew that we had entered a new era of glaucoma management. It was then that I knew, as well, that by changing the way that we examined the eye, we had entered a new era and there soon would be a new perspective not only for glaucoma, but retina diseases and other eye conditions as well.

Technology has moved forward considerably in the almost three decades since then. The imaging technologies available today, particularly optical coherence tomography, which was nascent at the time, were almost unimaginable then. And we always will remember Gerhard Zinser as a pioneer, a friend and colleague whose name is synonymous with excellence.

> Robert N. Weinreb, MD, Shiley Eye Institute, La Jolla, CA, USA

# **Preface**

To our knowledge, this book provides for the first time a comprehensive overview of the application of the newest laser and microscope/ophthalmoscope technologies to the field of high-resolution imaging in microscopy and ophthalmology. Ophthalmologists, physicists, and engineers collaborate in an interdisciplinary approach to summarize the newest findings of cutting-edge technologies in microscopy and ophthalmology. The newest clinical results of retina and glaucoma diagnostics and therapy control are presented. New findings in the assessment of the anterior segment of the eye are elucidated, providing the basis for innovations in cataract surgery and refractive surgery.

Until recently, the resolution of far-field light microscopy was limited to about 200 nm in the object plane and 600 nm along the optical axis (the "Abbe/Rayleigh limit"). These limits have been substantially overcome by various super-resolution fluorescence microscopy (SRM) methods. SRM allows linking the knowledge gained by molecular methods to cellular structures. In ophthalmology, adaptive optics (AO) has emerged as an empowering technology for retinal imaging with cellular resolution, providing diffraction-limited performance. By combining SRM and AO techniques, breaking the diffraction limit in retinal imaging may become feasible.

Since the first scanning laser ophthalmoscope (SLO) was introduced in the early 1980s, this confocal imaging modality has been adapted and optimized for various clinical imaging applications based on different contrast mechanisms. Optical coherence tomography (OCT) has moved to the forefront of ocular imaging because of the wide variety of information it can provide, its high-resolution images, and the complex 3-dimensional (3D) data it is able to gather.

For ophthalmology, OCT is of particular utility in glaucoma and retinal diseases, since it provides high-resolution, objective, quantitative assessment of the retinal cellular layers affected by each disease. Especially since glaucoma is a slowly progressing disease, objective and quantitative measures could potentially provide a more accurate and precise method for the diagnosis of glaucoma and detection of its progression.

Swept-source OCT technology offers inherent characteristics that are suitable for high-resolution anterior segment imaging and analysis. Such capabilities allow for non-contact imaging, detailed visualization, and analysis of anterior segment structures of the human eye including the cornea, anterior chamber, iris, and lens with one device. Swept-source OCT technology can also serve as a tool to measure the axial length of the human eye. The above-mentioned structures and parameters are used in ophthalmology for corneal topography, corneal tomography, anterior segment analysis, biometry, and calculation of intraocular lens power.

Adaptive optics has emerged as an empowering technology for retinal imaging with cellular resolution. This technology holds potential for noninvasive detection and diagnoses of leading eye diseases such as glaucoma, diabetic retinopathy, and age-related macular degeneration (AMD). Recent microstimulation techniques coupled with adaptive optics scanning laser ophthalmoscopy can produce stimuli as small as single photoreceptors that can be directed to precise locations on the retina. This enables direct in vivo study of cone activity and how it relates to visual perception.

The book is positioned at the border between engineering and medicine/biology: it addresses the MD/PhD who has a technical interest and wants to understand the equipment he/she uses, and, on the other side, the engineer who wants to understand the applications and the medical/biological background.

The editor is grateful to the authors of this book who have made this multifaceted overview of basic science and engineering as well as clinical topics possible. It was our intention to provide the ophthalmological community with the most recent results in eye diagnostics and surgery.

Finally, I would like to express my special thanks to Agnieszka Biedka, Barbara Hallet, Dr. Bettina Olker, and Katrin Petersen from the Technical Writing department at Heidelberg Engineering GmbH for their continuous professional support in the fields of editorial work, linguistics, and graphics. The editor is also grateful to the editorial group at Springer Nature, London, for their strong support.

This book was made possible due to the initiative of Kfir Azoulay and the enthusiastic support by Arianna Schoess Vargas and Christoph Schoess, the managing directors of Heidelberg Engineering GmbH, honoring the scientific excellence and lifetime achievements of Dr. Gerhard Zinser, cofounder and former managing director of Heidelberg Engineering GmbH.

Heidelberg, Germany

Josef F. Bille

# **Acknowledgment**

The editor acknowledges that Heidelberg Engineering GmbH provided a grant to support the open-access publication of this book.

## **Contents**




# **Contributors**

**Silke Aumann** Heidelberg Engineering GmbH, Heidelberg, Germany

**William H. Baldridge** Department of Medical Neuroscience, Dalhousie University, Halifax, NS, Canada

**Paul Bernstein** Moran Eye Center, University of Utah School of Medicine, Salt Lake City, Utah, USA

**Josef F. Bille** University of Heidelberg, Heidelberg, Germany

**Brett E. Bouma** Wellman Center for Photomedicine, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA

Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA

**Christopher Bowd** Ophthalmology, Hamilton Glaucoma Center, Shiley Eye Institute, and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, USA

**Boy Braaf** Wellman Center for Photomedicine, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA

**Ralf Brinkmann** Medical Laser Center Lübeck GmbH, Lübeck, Germany

**Balwantray C. Chauhan** Ophthalmology and Visual Sciences, Dalhousie University, Halifax, NS, Canada

**Federico Corvi** Eye Clinic, Department of Biomedical and Clinical Science "Luigi Sacco", Sacco Hospital, University of Milan, Milan, Italy

**Johannes F. de Boer** Vrije Universiteit Amsterdam, HV, Amsterdam, The Netherlands

**François Delori** Schepens Eye Research Institute, Harvard University, Boston, MA, USA

**Rosa Dolz-Marco** Heidelberg Engineering, Heidelberg, Germany Unit of Macula, Oftalvist Clinic, Valencia, Spain

**Sabine Donner** Heidelberg Engineering GmbH, Heidelberg, Germany

**Gerit Dröge** Heidelberg Engineering GmbH, Heidelberg, Germany

**Chantal Dysli** Department of Ophthalmology, Inselspital, University of Bern, Bern, Switzerland

**Spring RM. Farrell** Department of Pharmacology, Dalhousie University, Halifax, NS, Canada

**Oliver Findl** Department of Ophthalmology, Hanusch Hospital, Vienna, Austria

**Jörg Fischer** Heidelberg Engineering GmbH, Heidelberg, Germany

**Gesa Franke** Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

**Andreas Fritz** Heidelberg Engineering GmbH, Heidelberg, Germany

**Maximilian G. O. Gräfe** Vrije Universiteit Amsterdam, HV, Amsterdam, The Netherlands

**Rudolf F. Guthoff** Department of Ophthalmology, University Medical Center Rostock, Rostock, Germany

**Martin Hammer** Universitätsklinikum Jena, Jena, Germany

**Wolf M. Harmening** Department of Ophthalmology, University of Bonn, Bonn, Germany

**Stefan W. Hell** Max Planck Institute for Biophysical Chemistry, Göttingen, Germany

**Dierck Hillmann** Thorlabs GmbH, Lübeck, Germany

**Nino Hirnschall** Department of Ophthalmology, Hanusch Hospital, Vienna, Austria

**Frank G. Holz** Department of Ophthalmology, University of Bonn, Bonn, Germany

**Gereon Hüttmann** Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

Medical Laser Center Lübeck GmbH, Lübeck, Germany

Airway Research Center North (ARCN), German Center of Lung Research (DZL), Gießen, Germany

**Gopal Swamy Jayabalan** Heidelberg Engineering GmbH, Heidelberg, Germany

**Tschackad Kamali** Heidelberg Engineering GmbH, Heidelberg, Germany

**Yoshihiko Katayama** Heidelberg Engineering GmbH, Heidelberg, Germany

**Ralf Kessler** Heidelberg Engineering GmbH, Heidelberg, Germany

**Sasan Moghimi** Ophthalmology, Hamilton Glaucoma Center, Shiley Eye Institute, and Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, USA

**Frank Müller** Heidelberg Engineering GmbH, Heidelberg, Germany

**Philipp L. Müller** Department of Ophthalmology, University of Bonn, Bonn, Germany

Moorfields Eye Hospital, NHS Foundation Trust, London, UK

**Tobias Neuhann** Augenklinik am Marienplatz, Munich, Germany

**Tilman Otto** Heidelberg Engineering GmbH, Heidelberg, Germany

**Lucia Pace** Department of Biomedical and Clinical Sciences, University of Milan, Milan, Italy

**Clara Pfäffle** Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

**Melanie Polzer** Heidelberg Engineering GmbH, Heidelberg, Germany

**Boris Považay** HuCE OptoLab, Berne University of Applied Sciences, Switzerland

**Sebastian Rausch** Heidelberg Engineering GmbH, Heidelberg, Germany

**Roland Rocholz** Heidelberg Engineering GmbH, Heidelberg, Germany

**Steffen J. Sahl** Max Planck Institute for Biophysical Chemistry, Göttingen, Germany

**Ruth Sahler** Perfect Lens LLC, Irvine, CA, USA

**Lydia Sauer** Moran Eye Center, University of Utah School of Medicine, Salt Lake City, Utah, USA

**Stefan Schmidt** Heidelberg Engineering GmbH, Heidelberg, Germany

**Steffen Schmitz-Valckenberg** Department of Ophthalmology, University of Bonn, Bonn, Germany

**Lawrence C. Sincich** Department of Optometry and Vision Science, University of Alabama at Birmingham, Birmingham, AL, USA

**Jacqueline Sousa Asam** Heidelberg Engineering GmbH, Heidelberg, Germany

**Hendrik Spahr** Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

**Oliver Stachs** Department of Ophthalmology, University Medical Center Rostock, Rostock, Germany

**Giovanni Staurenghi** Department of Biomedical and Clinical Sciences "Luigi Sacco", University of Milan, Milan, Italy

**Markus Stoller** Meridian AG, Thun, Switzerland

**Helge Sudkamp** Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

Medical Laser Center Lübeck GmbH, Lübeck, Germany

**Hui Sun** University of Chinese Academy of Sciences, Beijing, China

**Ali Tafreshi** Heidelberg Engineering GmbH, Heidelberg, Germany

**Néstor Uribe-Patarroyo** Wellman Center for Photomedicine, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA

**Benjamin J. Vakoc** Wellman Center for Photomedicine, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA

Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA

**Julian Weichsel** Heidelberg Engineering GmbH, Heidelberg, Germany

**Robert N. Weinreb** Shiley Eye Institute, La Jolla, CA, USA

**Sebastian Wolf** Department of Ophthalmology, University of Berne, Berne, Switzerland

**Linda M. Zangwill** Shiley Eye Institute, La Jolla, CA, USA

**Martin S. Zinkernagel** Department of Ophthalmology, Inselspital, University of Bern, Bern, Switzerland

**Part I**

**Breaking the Diffraction Barrier in Fluorescence Microscopy**

# **High-Resolution 3D Light Microscopy with STED and RESOLFT**

Steffen J. Sahl and Stefan W. Hell

**We discuss the simple yet powerful ideas that have made it possible to break the diffraction resolution limit of lens-based optical microscopy. The basic principles and standard implementations of STED (stimulated emission depletion) and RESOLFT (reversible saturable/switchable optical linear (fluorescence) transitions) microscopy are introduced, followed by selected highlights of recent advances, including MINFLUX (minimal photon fluxes) nanoscopy with molecule-size (~1 nm) resolution.**

We are all familiar with the sayings "a picture is worth a thousand words" and "seeing is believing". Not only do they apply to our daily lives, but certainly also to the natural sciences. Therefore, it is probably not by chance that the historical beginning of modern natural sciences very much coincides with the invention of light microscopy. With the light microscope mankind was able to see for the first time that every living being consists of cells as basic units of structure and function; bacteria were discovered with the light microscope, and also mitochondria as examples of subcellular organelles.

S. J. Sahl (*), Max Planck Institute for Biophysical Chemistry, Göttingen, Germany. e-mail: Steffen.Sahl@mpibpc.mpg.de

S. W. Hell, Max Planck Institute for Biophysical Chemistry, Göttingen, Germany; Max Planck Institute for Medical Research, Heidelberg, Germany. e-mail: Stefan.Hell@mpibpc.mpg.de

However, we learned in high school that the resolution of a light microscope is limited to about half the wavelength of the light [1–4], which typically amounts to about 200–350 nm. If we want to see details of smaller things, such as viruses for example, we have to resort to electron microscopy. Electron microscopy has achieved a much higher spatial resolution—tenfold, hundredfold or even thousandfold higher; in fact, down to the size of a single molecule. Therefore the question arises: why do we care about the light microscope and its spatial resolution, now that we have the electron microscope?

The first reason is that light microscopy is the only way in which we can look inside a living cell, or even living tissues, in three dimensions; it is minimally invasive. But, there is another reason. When we look into a cell, we are usually interested in a certain species of proteins or other biomolecules, and we have to make this species distinct from the rest—we have to "highlight" those proteins [5]. This is because, to light or to electrons, all the proteins look the same.

In light microscopy this "highlighting" is readily feasible by attaching a fluorescent molecule to the biomolecule of interest [6]. Importantly, a fluorescent molecule [7] has, among others, two fundamental states: a ground state and an excited fluorescent state with higher energy. If we shine light of a suitable wavelength on it, for example green light, it can absorb a green photon so that the molecule is raised from its ground state to the excited state. Right afterwards the atoms of the molecule wiggle a bit—that is why the molecules have vibrational sub-states—but within a few nanoseconds, the molecule relaxes back to the ground state by emitting a fluorescence photon.

© The Author(s) 2019, J. F. Bille (ed.), *High Resolution Imaging in Microscopy and Ophthalmology*, https://doi.org/10.1007/978-3-030-16638-0_1

Because some of the energy of the absorbed (green) photon is lost in the wiggling of the atoms, the fluorescence photon is red-shifted in wavelength. This is actually very convenient, because we can now easily separate the fluorescence from the excitation light, the light with which the cell is illuminated. This shift in wavelength makes fluorescence microscopy extremely sensitive. In fact, it can be so sensitive that one can detect a single molecule, as has been discovered through the works of W. E. Moerner [8], of Michel Orrit [9] and their co-workers.
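The red-shift can be made concrete with a small numerical sketch. The wavelengths below (488 nm excitation, 520 nm emission) are illustrative assumptions typical of a green-absorbing fluorophore, not values from the text; only the physical constants are fixed:

```python
# Energy bookkeeping of the Stokes shift (illustrative wavelengths assumed).
H = 6.62607015e-34  # Planck constant, J*s
C = 2.99792458e8    # speed of light, m/s
EV = 1.602176634e-19  # 1 electron volt in joules

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon of the given wavelength, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

e_exc = photon_energy_ev(488.0)  # absorbed excitation photon
e_em = photon_energy_ev(520.0)   # emitted, red-shifted fluorescence photon

# The emitted photon carries less energy; the difference was lost to
# vibrational relaxation (the "wiggling" of the atoms).
assert e_em < e_exc
print(f"absorbed: {e_exc:.3f} eV, emitted: {e_em:.3f} eV, "
      f"lost to vibrations: {e_exc - e_em:.3f} eV")
```

The longer (red-shifted) emission wavelength is what lets a filter cleanly separate fluorescence from the excitation light.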

However, if a second molecule, a third molecule, a fourth molecule, a fifth molecule and so on are positioned closer together than about 200– 350 nm, we cannot tell them apart, because they appear in the microscope as a single blur. Therefore, it is important to keep in mind that resolution is about telling features apart; it is about distinguishing them. Resolution must not be confused with sensitivity of detection, because it is about seeing different features as separate entities.

#### **1.1 Breaking the Diffraction Barrier in the Far-field Fluorescence Microscope**

Now it is easy to appreciate that a lot of information is lost if we look into a cell with a fluorescence microscope: anything that is below the scale of 200 nm appears blurred. Consequently, if one manages to come up with a focusing (far-field) fluorescence microscope which has a much higher spatial resolution, this would have a tremendous impact in the life sciences and beyond.

In a first step, we have to understand why the resolution of a conventional light-focusing microscope is limited. In simple terms it can be explained as follows. The most important element of a light microscope is the objective lens (Fig. 1.1). The role of this objective lens is simply to concentrate the light in space, to focus the light down to a point. However, because light propagates as a wave, it is not possible for the lens to concentrate the light in a single point. Rather the light will be diffracted, "smeared out" in the focal region, forming a spot of light which is—at minimum—about 200 nm wide and about 500 nm along the optical axis [10]. This has a major consequence: if several features fall within this region, they will all be flooded with this light at the same time and hence produce signal simultaneously. In the case of fluorescence microscopy, this is the excitation light. As we try to detect the fluorescence signal with a lens and relay it onto a detector, the signals produced by the molecules within this >200-nm spot will be confused. This is because at the detector, each molecule will also produce a spot of focused (fluorescence) light and the spots from these simultaneously illuminated molecules will overlap (Fig. 1.1). No detector will be able to tell the signals from these molecules apart, no matter if it is the eye, a photomultiplier, or even a pixelated camera.

**Fig. 1.1** Focusing of light by the microscope (objective) lens cannot occur more tightly than the diffraction (Abbe's) limit. As a result, all molecules within this diffraction-limited region are illuminated together, emit virtually together, and cannot be told apart. Verdet [2], Abbe [1], Helmholtz [4], Rayleigh [3]

The person who fully appreciated that diffraction poses a serious limit on the resolution was Ernst Abbe, who lived at the end of the nineteenth century and who coined this "diffraction barrier" in an equation which has been named after him [1]. It says that, in order to be separable, two features of the same kind have to be further apart than the wavelength divided by twice the numerical aperture of the objective lens. One can find this equation in most textbooks of physics or optics, and also in textbooks of biochemistry and molecular biology, due to the enormous relevance of light microscopy in these fields. Abbe's equation is also found on a memorial which was erected in Jena, Germany, where Ernst Abbe lived and worked, and there it is written in stone. This is what scientists believed throughout the twentieth century. However, not only did they believe it, it also was a fact. For example, if one wanted to look at features of the cellular cytoskeleton in the twentieth century [5], this was the type of resolution obtained.
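Abbe's equation is easy to evaluate numerically. In the sketch below, the wavelength and numerical aperture (green light, a high-NA oil-immersion objective) are illustrative assumptions, not values from the text:

```python
# Abbe's diffraction limit: two features of the same kind closer together
# than d = wavelength / (2 * NA) cannot be separated by the lens.

def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Minimum resolvable separation in the focal plane (Abbe, 1873)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light at 500 nm through an NA 1.4 oil-immersion objective:
d = abbe_limit_nm(500.0, 1.4)
print(f"Abbe limit: {d:.0f} nm")  # ~179 nm, i.e. roughly the 200 nm quoted above
```

Even with the best conventional objectives, no choice of visible wavelength pushes this figure much below 200 nm, which is exactly the barrier the chapter goes on to break.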

This equation was coined in 1873. So much new physics emerged during the twentieth century, and so many new phenomena were discovered. There should be phenomena—at least one—that could be utilized to overcome the diffraction barrier in a light microscope operating with propagating beams of light and regular lenses. S.W.H. understood that it would not work just by changing the way the light propagates, the way the light is focused. [Actually, he had looked into that; it led him to the invention of the 4Pi microscope [11, 12], which improved the axial resolution, but did not overcome Abbe's barrier.] S.W.H. was convinced that a potential solution must have something to do with the major discoveries of the twentieth century: quantum mechanics, molecules, molecular states and so on.

Therefore, he started to check his textbooks again in order to find something that could be used to overcome the diffraction barrier in a light-focusing microscope. In simple terms, the idea was to check out the spectroscopic properties of fluorophores, their state transitions, and so on; maybe there is one that can be used for the purpose of making Abbe's barrier obsolete. Alternatively, there could be a quantum-optical effect whose potential has not been realized, simply because nobody thought about overcoming the diffraction barrier [13].

With these ideas in mind, one day when he was not very far from [Stockholm] in Åbo/Turku, just across the Gulf of Bothnia, on a Saturday morning, S.W.H. browsed a textbook on quantum optics [14] and stumbled across a page that dealt with stimulated emission. All of a sudden he was electrified. Why?

To reiterate, the problem is that the lens focuses the light in space, but not more tightly than 200 nm. All the features within the 200-nm region are simultaneously flooded with excitation light. This cannot be changed, at least not when using conventional optical lenses. But perhaps we can change the fact that all the features which are flooded with (excitation) light are, in the end, capable of sending light (back) to the detector. If we manage to keep some of the molecules dark to be precise, put them in a non-signaling state in which they are not able to send light to the detector—we will see only the molecules that can, i.e. those in the bright state. Hence, by registering bright-state molecules as opposed to dark-state molecules, we can tell molecules apart. So the idea was to keep a fraction of the molecules residing in the same diffraction area in a dark state, for the period of time in which the molecules residing in this area are detected. In any

**Fig. 1.2** Switching molecules within the diffraction-limited region transiently "off" (i.e. effectively keeping them in a non-signaling state) enables the separate detection of neighbouring molecules residing within the same diffraction region. (**a**) In fluorescence microscopy operating with conventional lenses (e.g. confocal microscopy), all molecules within the region covered by the main diffraction maximum of the excitation light are flooded with excitation light simultaneously and emit fluorescence

together. This is because they are simultaneously allowed to assume the fluorescent (signalling) state. (**b**) Keeping most molecules, except the one(s) one aims to register, in a dark state solves the problem. The dark state is a state from which no signal is produced at the detector. Such a transition to the dark "off" state is most simply realized by inducing stimulated emission, which instantaneously forces molecules to their dark ("off") ground state

case, keep in mind: the state (transition) is the key to making features distinct. And resolution is about discerning features.

For this reason, the question comes up: are there dark states in a fluorescent molecule? The answer is actually contained in the energy diagram in Fig. 1.2b. The ground state of the fluorophore is a dark state! For the molecule to emit fluorescence, the molecule has to be in its excited state. So the excited state is the signaling bright state, but the ground state is, of course, a non-signaling dark state.

What, then, is the role of stimulated emission? Actually, the answer is as simple as it is profound: it makes dark molecules, that is, molecules that are not seen by the detector! This was the reason why S.W.H. was so excited. He had found a way to make ordinary fluorophores not fluoresce, the very fluorophores that were commonly used in fluorescence microscopy. And now you can easily envisage how the microscope works: stimulated emission depletion, or STED microscopy [15–23]. Figure 1.3a sketches the lens, the critical component of a far-field optical


**Fig. 1.3** STED microscopy. (**a**) Setup schematic. (**b**) Region where the molecule can occupy the "on" state (green) and where it has to occupy the "off" state (red). (**c**) Molecular transitions. (**d**) For intensities of the STED light (red) equalling or in excess of the threshold intensity *Is*, molecules are effectively switched "off". This is

microscope, as well as a sample and a detector. We use a beam of light to excite molecules from the ground state to the excited state, making them bright ("on"). Inevitably, the excitation light will be diffracted, and one obtains a spot of light of at least 200 nm. Signal produced therein, by any of the molecules, can end up at the detector. But now we use a second beam of light which induces stimulated emission, and thus makes dark-state molecules. The idea is to instantly "push" the molecules that were excited back down to the ground state, so that the molecule is not capable of emitting light, because it has assumed the dark ground state ("off").

The physical condition for achieving this is that the wavelength of the stimulating beam is

because the STED light will always provide a photon that will stimulate the molecule to instantly assume the ground state, even in the presence of excitation light (green). Thus, the presence of STED light with intensity greater than *Is* switches the ability of the molecules to fluoresce off. Hell and Wichmann, Opt Lett [15]

longer (Fig. 1.3c). The photons of the stimulating beam have a lower energy, so they do not excite molecules but instead stimulate excited molecules to go back down to the ground state. There is another condition, however: we have to *ensure* that there is indeed a red photon at the molecule which pushes the molecule down. We emphasize this because most red-shifted photons pass the molecules by: the interaction probability of a photon with a molecule, i.e. the cross-section of interaction, is finite. But if one applies a stimulating light intensity at or above a certain threshold, one can be sure that there is at least one photon which "kicks" the molecule down to the ground state, thus making it instantly assume the dark state.

Figure 1.3d shows the probability that the molecule assumes the bright state, the S1, in the presence of the red-shifted beam transferring the molecule to the dark ground state. Beyond a certain threshold intensity, *Is*, the molecule is clearly turned "off". One can apply basically any intensity of green light; the molecule will still not be able to occupy the bright state and thus will not signal. Now the approach is clear: we simply modify this red beam to have a ring shape in the focal plane [19, 24], such that it carries no intensity at the centre. Thus, we can turn off the fluorescence ability of the molecules everywhere but at the centre. The ring or "doughnut" becomes weaker and weaker towards the centre, where it is ideally of zero intensity. There, at the centre, we will not be able to turn the molecules off, because there is no STED light, or it is much too weak.
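The switching curve of Fig. 1.3d can be sketched numerically. One simple model often used for pulsed STED writes the probability of an excited molecule remaining "on" as p_on = 2^(−I/Is), so that *Is* is by definition the intensity that halves the fluorescence; this exact functional form is an illustrative assumption, as the real curve depends on pulse parameters and on the fluorophore.

```python
def on_state_probability(i_over_is: float) -> float:
    """Fraction of excited molecules that still fluoresce ("on") under
    STED light of intensity I, in a simple exponential depletion model
    where Is is the intensity that halves the fluorescence."""
    return 2.0 ** (-i_over_is)

# Beyond a few times Is, the molecules are effectively switched "off".
for ratio in (0, 1, 5, 10, 50):
    print(f"I/Is = {ratio:2d}  ->  p_on = {on_state_probability(ratio):.2e}")
```

In this model the "off" switching is never abrupt, but for I/Is of 10 or more the residual on-state probability is below 0.1%, which is the sense in which the text speaks of a threshold.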

Now let's have a look at the sample (Fig. 1.3b) and let us assume that we want to see just the fibre in the middle. Therefore, we have to turn off the fibre to its left and the one to its right. What do we do? We cannot make the ring smaller, as it is also limited by diffraction. Abbe would say: "Making narrower rings of light is not possible due to diffraction." But we do *not* have to do that. Rather, we simply have to "shut off" the molecules of the fibres that we do *not* want to see, that is, we make their molecules dwell in a dark state, until we have recorded the signal from that area. Obviously, the key lies in the preparation of the states. So what do we do? We make the beam strong enough so that the molecules even very close to the centre of the ring are turned "off" because they are effectively confined to the ground state all the time. This is because, even close to the centre of the ring, the intensity is beyond the threshold *Is* in absolute terms.

Now we succeed in separation: only in the position of the doughnut centre are the molecules allowed to emit, and we can therefore separate this signal from the signal of the neighbouring fibres. And now we can acquire images with subdiffraction resolution: we can move the beams across the specimen and separate each fibre from the other, because their molecules are forced to emit at different points in time. We play an "*on/off game*". Within the much wider excitation region, only the subset of molecules at the centre of the doughnut ring is allowed to emit at any given point in time. All the others around them are effectively kept in the dark ground state. Whenever one checks which state they are in, one will nearly always find those molecules in the ground state.

This concept turned out to work very well [17, 19, 23, 25]. Figure 1.4a contains a standard, high-end confocal recording in which one cannot make out what the structure is. Figure 1.4b shows the same region imaged using STED microscopy. The resolution is increased by about an order of magnitude (in the red channel), and one can clearly discern what is actually being imaged here: nuclear pore complexes. As a result of the high resolution, it can be seen that each nuclear pore complex features eight molecular subunits. The eightfold symmetry comes out very clearly [25]. There is almost no comparison with the standard confocal recording.

Needless to say, if afforded this increase in spatial resolution, one obtains new information. In other words, new insights are gained with this microscope. Here, we briefly describe research done in collaboration with virologists interested in the human immunodeficiency virus (HIV). Generally, viruses are about 30–150 nm in diameter [5]. So, if one wants to image them with a light microscope, there is no chance this will succeed: one will not see any details of protein distributions on the virus particles. A diffraction-limited fluorescence microscope would yield just a 250–350 nm sized fluorescence blur. HIV itself is about 140 nm in size. The collaborating scientists were interested in finding out how a protein called Env is distributed on the HIV particle [26] (Fig. 1.5). In the normal recording, nothing specific is seen. In contrast, the high-resolution STED recording revealed that the protein Env forms patterns on the HIV particles. What was actually found out in this study is that the mature HIV particles, those which are ready to infect the next cell, have the Env concentrated basically in a single place on the virus. This seems to be a requirement for HIV to be very effective—a new mechanistic insight gained as a result of subdiffraction-resolution imaging.

Göttfert, Wurm *et al.*, Biophys J (2013)

Of course, a strength of light microscopy is that we can image living cells, and video-rate recording is possible with STED microscopy. One example is synaptic vesicles in the axon of a living neuron [20]. One can directly see how they move about, and we can study their dynamics and their fate over time. It is clearly important to be able to image living cells.

Live-cell imaging "at the extreme" is pictured in Fig. 1.6. Here, we opened the skull of an anaesthetized mouse and looked into the brain of the mouse at the upper, so-called molecular layer of the visual cortex [21]. This was a transgenic mouse, meaning that some of its neurons expressed a fluorescent protein, specifically the yellow fluorescent protein YFP, which is why this neuron is highlighted against the surrounding brain. The surrounding brain tissue is dark. We then took sequential recordings and could see the receiving synaptic ends of the neuron—the so-called dendritic spines. They move slightly, and it is worthwhile zooming in on them. One discerns the spine neck and, in particular, the details of the cup-shaped end of the dendritic spines. STED microscopy allows these tiny morphologies to be visualized, such that we can observe their subtle temporal changes. I am very confident that in the not too distant future we will be able to image the proteins here at the synapse [27]. I can also imagine that we will be able to give a visual cue to the

**Fig. 1.5** STED nanoscopy of the HIV Envelope protein Env on single virions. Confocal microscopy is not able to reveal the nanoscale spatial distribution of the Env proteins; the images of the Env proteins on the virus particles look like 250–350 nm sized blurred spots (orange, left column). STED microscopy reveals that the Env proteins form spatial patterns (center column, orange), with mature particles having their Env strongly concentrated in space (panel in top row of center column, orange). The data was published in [26]

**Fig. 1.6** STED nanoscopy in living mouse brain. The recording shows a part of a dendrite of a neuron expressing a yellow fluorescent protein (EYFP) in the cytosol, thus highlighting the neuron amidst surrounding (non-labelled) brain tissue. The three- to fourfold improved resolution over confocal and multiphoton excitation fluorescence microscopy reveals the dendritic spines (encircled) with superior clarity, particularly the cup-like shape of some of their terminals containing the receiving side of the synapses. The data was published in [21]

mouse and observe how this actually changes the protein distribution directly at the synapse. Thus, in the end we should learn how neuronal communication or memory formation works at the molecular level. Since STED microscopy relies on freely propagating light, one can perform three-dimensional (3D) imaging. It is possible to focus into the brain tissue, for example, and record a 3D data set.

Coming back again to the basics, to the spatial resolution, some will ask: What is the resolution we can get? What is the limit? Indeed, is there a new limit? So let us get back to the principle. The "name of the game" is that we turn off molecules everywhere but at the intensity minimum, at the central zero, of the STED beam [28–31]. If we can make the region in which the molecules are still allowed to emit smaller, the resolution is improved; that is clear. The extent (or diameter) of the region in which the molecules are still "on" now determines the spatial resolution. Clearly, it cannot be described by Abbe's equation any more. In fact, this diameter must depend on the intensity *I* which is found at the doughnut crest (Fig. 1.7b, d) and on the threshold intensity *Is*, which is a characteristic of the photon-molecule interaction. The larger their ratio becomes, the smaller *d* will become. It is now easy to appreciate that this ratio must be found in the denominator, if we describe the resolution with a new equation which is now obviously required [23, 28, 29]. In fact, *d* scales inversely with the square root of *I/Is*. So the larger *I/Is*, the smaller is *d* (Fig. 1.8). As a result, *d* tends to 0 for larger and larger values of *I/Is* (Fig. 1.7b, d).
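The inverse-square-root scaling can be sketched numerically with the commonly cited extension of Abbe's equation, d = λ/(2 NA √(1 + I/Is)) [23, 28, 29], which reduces to Abbe's diffraction limit for I/Is = 0. The wavelength and numerical aperture below are illustrative assumptions, not values from the text.

```python
import math

def sted_resolution_nm(wavelength_nm: float, na: float, i_over_is: float) -> float:
    """Extended Abbe equation for STED/RESOLFT:
    d = lambda / (2 * NA * sqrt(1 + I/Is))."""
    return wavelength_nm / (2.0 * na * math.sqrt(1.0 + i_over_is))

# Assumed example: 635 nm excitation, NA 1.4 oil-immersion objective.
for ratio in (0, 10, 100, 1000):
    d = sted_resolution_nm(635.0, 1.4, ratio)
    print(f"I/Is = {ratio:4d}  ->  d = {d:6.1f} nm")
```

With these assumed optics, I/Is = 0 gives Abbe's ~227 nm, while I/Is = 100 already squeezes *d* to roughly 23 nm, illustrating why *d* tends to 0 as I/Is grows.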

In the situation depicted in Fig. 1.7b, we cannot separate two of the close-by molecules because both are allowed to emit at the same

**Fig. 1.7** (**a**–**d**) Resolution scaling in the STED/RESOLFT concepts: an extension of Abbe's equation. The resolution scales inversely with the square root of the

ratio between the maximum intensity at the doughnut crest and the fluorophore-characteristic threshold intensity *Is*

**Fig. 1.8** Tunable resolution enhancement realized by STED microscopy. (**a**, **b**) Confocal (**a**) and STED (**b**) image of fluorescent beads with average size of ~24 nm on a cover slip. (**c**–**g**) The area of the white rectangle shown in (**a**) and (**b**) recorded with different STED intensities.

The resolution gain can be directly appreciated. (**h**) STED depletion η vs. STED-light intensity measured on the same sample. The intensity settings for the measurements (**c**–**g**) are marked by red arrows. Scale bars 1 μm (**a**, **b**), 200 nm (**c**–**g**). Reproduced with permission from [33]

time. But let us make the beam a bit stronger, so that only one molecule "fits in" the region in which the molecules are allowed to be "on". Now the resolution limit is apparent: it is the size of a *molecule*, because a molecule is the smallest entity one can separate. After all, we separate features by preparing their molecules in two different states, and so it must be the molecule which is the limit of spatial resolution. When two molecules come very close together, we can separate them because at the time one of them is emitting, the other one is "off" and vice versa [28, 30–32].

It is worth noting that if all the "off" or dark molecules are entirely dark, i.e. non-signaling, then detecting a *single* photon from a molecule is absolutely enough to know that there is a molecule present (at the minimum of the STED beam). The position of that molecule is entirely determined by the presence of the STED-beam photons. These photons determine exactly *where* the molecule is "on" and where it is "off" (dark). The detected fluorescence photons only indicate the presence of a molecule, or many of them [30–32].

Does one typically obtain molecular spatial resolution, and what about in a cell? For STED microscopy right now, the standard resolution is between 20 and 40 nm, depending on the fluorophore and on the fluorophore's chemical environment [25]. But this is something which is progressing; it is under continuous development. With fluorophores which have close-to-ideal properties and can be turned "on" and "off" as many times as desired, we can do much better, of course.

In fact, there are such fluorophores—not organic but inorganic ones—which meet this requirement already. These are so-called charged nitrogen vacancies in diamond (Fig. 1.9), fluorescent defects in diamond crystals which can be turned on and off an almost unlimited number of

**Fig. 1.9** Fluorophores affording virtually unlimited repetitions of the resolution-enabling on-off state transitions provide the present resolution records in far-field optical imaging using STED, in the single-digit nanometer regime. Color centers (charged nitrogen vacancy centers) in diamond hold great potential for various other applications, notably in magnetic sensing and quantum information, which may eventually be read out with diffraction-unlimited spatial resolution using conventional lenses, i.e. even when packed very densely at the nanometer scale

times [34]. Imaging these defects, the region of emission was squeezed down to 2.4 nm [35]. It is worth keeping in mind that the wavelength responsible for this result is 775 nm. So the region of emission is smaller than 1% of the wavelength, a very small fraction indeed.
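The quoted fraction is quickly checked from the two numbers in the text:

```python
# Ratio of the emission-region diameter to the wavelength used,
# for the diamond color-center result quoted in the text.
emission_region_nm = 2.4
wavelength_nm = 775.0

fraction = emission_region_nm / wavelength_nm
print(f"{fraction:.4f}  (~{fraction * 100:.2f}% of the wavelength)")
# prints: 0.0031  (~0.31% of the wavelength)
```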

This may look like a proof-of-principle experiment, and to some extent it is. But it is not just that; there is another reason to perform these experiments [34, 36, 37]. The so-called charged nitrogen vacancies are currently regarded as attractive candidates for quantum computation: as qubits operating at room temperature [38, 39]. They possess a spin state with a very long coherence time, which can be prepared and read out optically. Being less than a nanometer in size, they can sense magnetic fields at the nanoscale [40, 41]. So there are inherently nanosensors in there, and STED is perhaps the best way of reading out their state and the magnetic fields at the nanoscale. In the end, this could make STED an interesting candidate perhaps for reading out qubits in a quantum computer, or who knows … Development goes on!

Returning to the fundamentals, we have emphasized that the name of the game is "on/off", or keeping a fraction of the molecules dark for separation [30–32]. This is how we separate molecules, with a bright state and a dark state. Once it is clear that this is a general principle, it is obvious that stimulated emission is not the only way by which we can play this "on/off game". There must also be other "on" and "off" states in a dye which one can use to the same effect [22, 28–30]. With this in mind, S.W.H. browsed other textbooks and found that there are triplet states, long-lived dark states and, of course, in chemistry textbooks, one will find that there is photoinduced cis-trans isomerization (Fig. 1.10). One might ask: why use these special transitions, which, unlike stimulated emission, are not found in just any fluorophore, so that special fluorophores are needed? After all, the transitions used in STED are truly basic: optical excitation and de-excitation. And the two states between which these transitions are induced are the most basic states imaginable, namely the ground and the first excited state.

Indeed, it turns out that there is a strong reason for looking into other types of states and state transitions. Consider the state lifetimes (Fig. 1.10). For the basic STED transition, the lifetime of the state, the excited state, is nanoseconds (Fig. 1.10a). For metastable dark states used in methods termed ground state depletion (GSD) microscopy [42–44] (Fig. 1.10b), the lifetime of the state is microseconds, and for isomerization it is on the order of milliseconds (Fig. 1.10c). Why are these major increases in the utilized state lifetime relevant?

Well, just remember that we separate adjacent features by transferring their fluorescent molecules into two different states. But if one of the states disappears after a nanosecond, then the *difference in states* created disappears after a nanosecond. Consequently, one has to hurry up putting in the photons, creating this difference in states, as well as reading it out, before it disappears. But if one has more time—microseconds, milliseconds—one can turn molecules off, read the remaining ones out, turn them on, turn them off, and so on; they stay there, because their states are long-lived. One does not have to hurry up putting in the light, and this makes this "separation by states" operational at *much* lower light levels [28, 42].

To be more formal, the aforementioned intensity threshold *Is* scales inversely with the lifetime of the states involved (Fig. 1.10e): the longer the lifetime, the smaller is *Is*, and the diffraction barrier can be broken using this type of transition at much lower light levels. *Is* goes down from megawatts per square centimetre (STED) through kilowatts per square centimetre (GSD) to watts per square centimetre for millisecond switching times—a six orders of magnitude range [28]. This makes transitions between long-lived states very interesting, of course. Here in the equation (Fig. 1.10d), *Is* goes down and with it, of course, *I* goes down as well, because one does
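The six-orders-of-magnitude range follows directly from the inverse lifetime scaling, Is ∝ 1/τ. In the sketch below, the proportionality constant is an assumption chosen only so that the nanosecond (STED) case sits at about 1 MW/cm²; the lifetimes are the orders of magnitude quoted in the text.

```python
# Illustrative sketch: threshold intensity Is scales as 1/tau.
# Absolute numbers are assumptions for illustration only; the text
# quotes the MW/cm^2 -> W/cm^2 range from [28].
lifetimes_s = {
    "STED (excited state, ~ns)":  1e-9,
    "GSD (triplet state, ~us)":   1e-6,
    "RESOLFT (cis-trans, ~ms)":   1e-3,
}

# Anchor the scale so the nanosecond case sits near 1 MW/cm^2.
k = 1e6 * 1e-9  # (W/cm^2) * s

for name, tau in lifetimes_s.items():
    print(f"{name:30s} Is ~ {k / tau:.0e} W/cm^2")
```

Each thousandfold gain in lifetime buys a thousandfold reduction in the required intensity, spanning six orders of magnitude from nanoseconds to milliseconds.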

Principle: Discern by **ON / OFF** states in the sample

**Fig. 1.10** States and state transitions utilized in (**a**) STED, (**b**) GSD and (**c**) RESOLFT nanoscopy. (**d**) The intensity *Is* for guaranteeing the transition from the on- to the off-state is inversely related to the state lifetime. The

longer the lifetime of the involved states, the fewer photons per second are needed to establish the on-off state difference which is required to separate features residing within the diffraction barrier

not need as many photons per second in order to achieve the same resolution *d*.

The cis-trans isomerization is particularly interesting because it is found in switchable fluorescent proteins. S.W.H. looked into this very early on to check whether it can be used for a STED-like recording. Eventually, S.W.H. called it RESOLFT, for "Reversible Saturable/Switchable Optically Linear (Fluorescence) Transitions" [28, 45–47], simply because he could not have called it STED anymore. There is no stimulated emission in there, which is why he had to give it a different name. The strength is not only that one can obtain high resolution at low light levels. Notably, one can use inexpensive lasers, continuous wave (CW) lasers, and/or spread out the light over a large field of view, because one does not need such intense light to switch the molecules. In this way, one can parallelize the recordings, meaning that one can make an array of many holes (intensity minima, zeros) at the same time and read out a large field of view quickly (Fig. 1.11). It does not matter that one has many of these intensity minima at the same time. As long as they are each further apart than Abbe's diffraction barrier, they can be read out simultaneously by projecting the signal generated in this array of minima onto a camera. Only a few scanning steps in one direction and in the orthogonal direction, and a super-resolution image of a large field of view is taken. In Fig. 1.12 [48], a living cell was recorded within 2 s with more than 100,000 "doughnuts", so to speak, in parallel.
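The arithmetic behind this parallelization can be sketched as follows. All numbers below are illustrative assumptions, not values from the text (which quotes >100,000 minima and a 2 s live-cell recording [48]): each intensity minimum only has to scan its own pitch-by-pitch cell, so the number of scan positions is set by the doughnut spacing, not by the field size.

```python
# Back-of-the-envelope sketch of parallelized RESOLFT scanning.
field_um = 50.0   # assumed field of view (square side)
pitch_nm = 500.0  # assumed doughnut spacing, above the diffraction limit
step_nm = 25.0    # assumed scan step, sets the pixel size

minima_per_axis = int(field_um * 1000 / pitch_nm)
n_minima = minima_per_axis ** 2

# Each minimum covers only its own pitch x pitch cell, so the number
# of scan positions is independent of the field size.
steps_per_axis = int(pitch_nm / step_nm)
n_scan_positions = steps_per_axis ** 2

print(f"{n_minima} minima read out in parallel, "
      f"{n_scan_positions} scan positions each")
```

With these assumptions, 10,000 doughnuts are read out simultaneously and only 400 camera frames cover the whole field, instead of the 4 million positions a single scanning doughnut would need.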

**Fig. 1.11** Parallelization of the STED/RESOLFT concept holds the key to faster imaging. The diffraction problem has to be addressed only for molecules residing within a diffraction-limited region. Thus, many intensity minima ('doughnuts') are produced, at mutual distances

greater than the diffraction limit, for highly efficient scanning of large sample areas. The use of highly parallelized schemes is greatly facilitated by harnessing transitions between long-lived molecular on-off states, such as cis/trans

Notwithstanding the somewhat different optical arrangement, the key is the molecular transition. Selecting the right molecular transition determines the parameters of imaging. The imaging performance, including the resolution and the contrast level, as well as other factors, is actually determined by the molecular transition chosen [32].

Putting up the next question, what does it take to achieve the best resolution? Now let us assume one had asked this question in the twentieth century. What would have been the answer? Well,

the answer was unquestionably: good lenses [10]. Sure, good lenses. Why? Because the separation of neighbouring features was performed by the *focusing of light*. Then, of course, one needs good lenses to produce the sharpest focal spot of light at the sample here, there, and everywhere, and/or the sharpest focal spot of light anywhere at the detector. However, once one cannot produce an even smaller focal spot of light, this strategy has come to an end (Fig. 1.13, top). Therefore, if several features fall within a diffraction-limited spot of light, one simply cannot do any better. Resolution is definitely limited by diffraction if one separates features by the focusing of light: there is no way to tell the features, the molecules, apart, because everything overlaps on the detector (Fig. 1.13, top). So what was the solution to this problem?

*Do not separate just by focusing. Separate by molecular states*, in the easiest case by "on/off" states [28–31]. If separating by molecular states, one can indeed distinguish the features, one can tell the molecules apart even though they reside within the region dictated by diffraction. We can tell, for instance, one molecule apart from its neighbours and discern it (Fig. 1.13, bottom). For

**Fig. 1.14** Both in coordinate-targeted and in coordinate-stochastic nanoscopy methods, many photons are required to define or establish, respectively, molecular coordinates at subdiffraction scales. In the coordinate-targeted mode (STED, RESOLFT, etc.), the coordinates of (e.g.) the "on" state are established by *illuminating* the sample with a pattern of light featuring an intensity zero; the location of the zero and the pattern intensity define the coordinates with subdiffraction precision. In the coordinate-stochastic mode

this purpose, we have our choice of states that I have introduced already (Fig. 1.10) which we can use to distinguish features within the diffraction region.

In the methods described, STED, RESOLFT and so on, the position of the state (where the molecule is "on" and where the molecule is "off") is determined by a pattern of light featuring one or more intensity zeros, for example a doughnut. This light pattern clearly determines where the molecule has to be "on" and where it has to be "off". The coordinates X, Y, Z are tightly controlled by the incident pattern of light and the position(s) of its zero(s). Moving the pattern to the next position X, Y, Z, one already knows where the "on" and "off" states occur. One does not necessarily require many *detected* photons from the "on"-state molecules, because the detected photons are merely indicators of the presence of a feature. The occurrence

(PALM, STORM etc.), the coordinates of the randomly emerging "on"-state molecules are established by analysing the light patterns *emitted* by the molecules (localization). Precision of the spatial coordinates increases in both cases with the number of photons in the patterns of the spatial coordinates, i.e. by the intensity of the pattern. In both families of methods, neighbouring molecules are discerned by transiently creating different molecular states in the sample. The references shown are to [8, 9, 49, 51, 52] described in the text

of the state and its location is fully determined by the incident light pattern.

Now the question comes up: how does this compare with the seminal invention by Eric Betzig [49], based on the discovery of W. E. Moerner [8, 50], that one can detect single molecules? In the PALM ("Photo-Activated Localization Microscopy") concept [49] (also called STORM or FPALM [51, 52]), there are two fundamental differences from STED-like approaches (Fig. 1.14). First of all, it critically relies on the detection of single molecules. Secondly, unlike in the STED case, in the PALM case the spatial position of the on-state is uncontrolled, totally stochastic. A molecule "pops up" somewhere randomly in space, a single molecule per diffraction-sized region, and it is in this way that the "on"/"off" state difference is created. But since one does not know where a molecule has turned to the on-state, a *pattern of light* must be used with which one can measure the position. This pattern of light is the fluorescent light which is emitted by the molecule and imaged onto an array detector, usually a camera. The pixels of the camera provide the coordinate reference. Without going into the details, this pattern of emitted fluorescence light allows one to determine the molecule's position with a centroid calculation.
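The centroid calculation can be sketched with a toy Monte-Carlo simulation: each detected photon lands at the molecule's true position plus a Gaussian diffraction blur, and averaging n photon positions localizes the molecule with precision roughly σ_PSF/√n. The spot width and photon numbers below are illustrative assumptions.

```python
import random
import statistics

random.seed(1)
SIGMA_PSF_NM = 100.0  # assumed diffraction-limited spot width
TRUE_POS_NM = 0.0     # true molecule position (1D for simplicity)

def localization_precision(n_photons: int, trials: int = 500) -> float:
    """Standard deviation of the centroid estimate over repeated trials:
    each trial draws n photon positions blurred by the PSF and averages them."""
    centroids = [
        statistics.fmean(random.gauss(TRUE_POS_NM, SIGMA_PSF_NM)
                         for _ in range(n_photons))
        for _ in range(trials)
    ]
    return statistics.pstdev(centroids)

for n in (1, 100, 1000):
    print(f"n = {n:4d} photons -> precision ~ {localization_precision(n):6.1f} nm"
          f" (sigma/sqrt(n): {SIGMA_PSF_NM / n ** 0.5:6.1f} nm)")
```

A single photon localizes no better than the ~100 nm diffraction blur itself, while 100 detected photons reach roughly 10 nm, which is the 1/√n behaviour discussed below.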

An interesting insight here is that one needs a *bright* pattern of emitted light to *find out* the position just as one needs a bright pattern of incident light in STED/RESOLFT to *determine* the position of emission. Not surprisingly, one *always* needs bright patterns of light when it comes to positions, because if one has just a single photon, this alone tells nothing. The photon can go anywhere within the realm of diffraction, there is no way to control where it goes within the diffraction zone. In other words, when dealing with positions, one needs *many* photons by definition, because this is inherent to diffraction. Many photons are required for defining positions of "on" and "off"-state-molecules in STED/RESOLFT microscopy, just as many photons are required to find out the position of "on"-state molecules in the stochastic method PALM.

One is not confined to using a single doughnut (a single diffraction zone) in STED/RESOLFT. We can use a "widefield" arrangement, meaning that we can also record a large field of view (compare the blue pattern in Fig. 1.11). To this end, we parallelize the scanning using an array of intensity minima, such as an array of doughnuts. Again, the fundamental difference to the spatially stochastic methods is (Fig. 1.15) that the positions where the molecules can assume the "on" or the "off" state are tightly controlled by the pattern of light with which we illuminate the sample. This is regardless of whether there is one molecule at the intensity minimum of the pattern, or three molecules; however many, it does not matter.

Although the PALM principle can also be implemented on a single diffraction zone only (i.e. using a single focused beam of light), it is usually implemented in a "parallelized" way, i.e. on a larger field of view containing many diffraction zones. PALM parallelization requires that there may be only a single "on"-state molecule within a diffraction zone, i.e. within the distance dictated by the diffraction barrier. However, the position of this molecule is completely random. Therefore, we have to make sure that the "on" state molecules are far enough apart from each other, so that they are still identifiable as separate molecules. While in (STED/RESOLFT) the position of a certain state is given by the pattern of light falling on the sample, position in PALM is established from the pattern of (fluorescence) light coming out of the sample.

What does *I*/*I*s in STED/RESOLFT stand for? *I*s can be seen as the number of photons that one needs to ensure that there is at least one photon interacting with the molecule, pushing it from one state to the other in order to create the required difference in molecular states. *I*/*I*s is, so to speak, the number of photons which really "can do something" at the molecule while most of the others just "pass by". Similarly, in the PALM concept, the number of photons *n* in 1/√(*n*) is the number of those photons that are detected, i.e. that really contribute to revealing the position of the emitting molecule. In other words, in both concepts, to attain a high coordinate precision, one needs *many* photons that really do something. This analogy shows very clearly the importance of the number of photons for achieving coordinate precision in both concepts.

However, in both cases the separation of features is, of course, accomplished by an "on/off" transition [28–31]. This is how we make features distinct, how we tell them apart. As a matter of fact, all the super-resolution methods which are in place right now and really useful achieve molecular distinguishability by transiently placing the molecules that are closer together than the diffraction barrier in two different states for the time period in which they are jointly scrutinized by the detector. "Fluorescent" and "non-fluorescent" is the easiest pair of states to play with, and so this is what has worked out so far.

One can take the point of view that in the twentieth century it was the lenses which were decisive, and the lens makers ruled the field. One had to go to them and ask them for the best lenses to get the best resolution. But how is it today? It is no longer the lens makers. This resolution game is not about lenses anymore. It is about molecular states, and molecular states are of course about *molecules*. The molecules now determine how well we can image; they determine the spatial resolution. And that is not optical technology, that is *chemistry*. What was initially a physics problem (the diffraction barrier certainly was, no doubt about it) has now evolved into a chemistry topic.

**Fig. 1.15** To parallelize STED/RESOLFT scanning, a "widefield" arrangement with an array of intensity minima (e.g. an array of doughnuts) may be used. The number of molecules at these readout target coordinates does not matter, whereas PALM requires that there be only a single "on"-state molecule within a diffraction zone, i.e. within the distance dictated by the diffraction barrier. [More precisely: the number of molecules per diffraction zone has to be so low that each molecule is recognized individually.] The position of each on-state molecule is, however, completely random in space. *I*s can be regarded as the number of photons that one needs to ensure that there is at least one photon interacting with the molecule, pushing it from one state to the other in order to create the required difference in molecular states. *I*/*I*s is, so to speak, the number of photons which really elicit the (on/off) state transition at the molecule, while most of the others just "pass by". Similarly, in the PALM concept, the number of photons *n* in 1/√(*n*) is the number of those photons that are really detected at the coordinate-giving pixelated detector (camera), i.e. that really contribute to revealing the position of the emitting molecule. In other words, in both concepts, to attain a high coordinate precision, one needs *many* photons that act. The references shown are to [8, 9, 49, 51, 52] described in the text

The enabling element being a transition between two states, the two states need not be fluorescence "on"/"off": they could also be a pair of states "A" and "B", like "absorption/non-absorption", "scattering/non-scattering", "spin up/spin down", "bound/unbound" (as in the method called PAINT [53]), etc. Perhaps one can also imagine a super-resolution *absorption* microscope or a super-resolution *scattering* microscope, if one identifies the right states.

The field is progressing rapidly, and some selected highlights have been the demonstration of STED nanoscopy in 3D [54, 55] (compare Fig. 1.16), at millisecond imaging times for ultrafast dynamics in small fields of view [56], the demonstration of a RESOLFT strategy to neutralize the diffraction limit of light-sheet fluorescence microscopy [57] (compare Fig. 1.18), efficient STED nanoscopy with quantum dots as fluorescent labels [58], the highest levels of 3D isotropic resolution (<30 nm in *x*, *y*, *z* simultaneously) with a new, stable design of 4Pi-based isoSTED [59] (compare Fig. 1.17 for more details on the isoSTED approach), and the several-thousand-fold "massive" parallelization of RESOLFT and even STED without resolution compromises for faster imaging of large fields [60, 61], as well as extended multicolor capabilities [62] and nanoscopy in living animals (Fig. 1.19).

**Fig. 1.16** 3D STED microscopy for simultaneously increasing the resolution in the focal plane and along the optic axis. (**Left Top**) Schematic setup. The STED power is distributed between the two phase plates (Plat and P3D) by using a combination of a λ/2 plate and a polarizing beam splitter (PBS). The second PBS recombines the two beams incoherently. The excitation (Exc) and STED beams are overlaid by a dichroic mirror (DM). A λ/4 plate ensures the circular polarization of all beams prior to being focused by the objective lens (OL). The fluorescence signal (Fl) is collected by the same lens. (**Left Bottom**) Focal intensity distributions of excitation and STED beams measured using gold beads in reflectance mode. From left to right: excitation, STED beam from the Plat arm resulting in the focal de-excitation pattern STEDlat, STED beam from the P3D arm yielding STED3D, incoherent combination of both arms (30% STEDlat/70% STED3D power distribution). The latter distribution results in an efficient coverage of the volume around the focal point. Scale bars 500 nm. (**Right**) (**a**–**d**) 3D nanoscale image of a dilute distribution of 20 nm diameter fluorescent spheres on glass. xy sections of (**a**) confocal and (**b**) STED. (**c**) Confocal and (**d**) STED xz sections along the dashed blue line indicated in panels (**a**) and (**b**). Individual beads can be easily resolved in the STED images. Comparing panel (**c**) with panel (**d**), note the significant reduction in cross-sectional area in the STED xz-image. (**e**, **f**) Intensity profiles along the (**e**) x and (**f**) z direction for sections indicated by the white arrows in panels (**c**) and (**d**). All presented data are raw data. (**g**) Focal volume reduction relative to the confocal focal volume measured using 20 nm fluorescent spheres. The combination of two de-excitation patterns gives a maximal volume reduction factor of 125. Scale bars 1 μm. From [54], reproduced with permission

**Fig. 1.17** isoSTED: Fluorescence microscopy setup with isotropic 3D focal spot. (**a**) Beams for excitation, STEDxy (lateral) and STEDz (axial) are combined using a dichroic mirror (DCSP) and then fed through a beam scanner into a 4Pi unit with two opposing objective lenses (O1 and O2; HCX PL APO 100x, 1.46 NA OIL CORR). The fluorescence light (orange) collected by both lenses backpropagates along the same optical path to the detector, passing through the DCSP and a second dichroic mirror (DCLP). The pivot plane (PP) of all scanning beams is conjugated to the entrance pupils of the objective lenses. The incoming beams are divided by a polarizing beam-splitter (PBS) and coherently superimposed at both lenses' common focal plane inside the sample (S). A piezo-driven mirror (MP) controls the difference in pathlength between both cavity arms and thereby the 4Pi phases of all beams. The polarization state of STEDxy and STEDz is adjusted by two half-wave retarder plates (H1 and H2). The excitation beam and the STED beams for lateral (STEDxy, imprinted with a circular phase ramp (PM)) and axial (STEDz) fluorescent spot compression are polarized under α = 45° with respect to the perpendicular direction (n) to the splitting plane (p) of the polarizing beam-splitter. STEDxy and STEDz are polarized orthogonal to each other. (**b**) Calculated focal intensity distributions and formation of the STED PSF with respective wavelengths, λ, and 4Pi phases ϕ. (**c**) Isotropic effective focal spot (PSF) on the nanoscale. (**Left**) Calculated PSF of a confocal fluorescence microscope and the corresponding spherical PSF of the isoSTED microscope at *I*m/*I*s = 15 (NA = 1.4). (**Middle**) Experimental counterpart to (left) as measured with a 21-nm-diameter fluorescence bead. The FWHM of the confocal setup (1.46 NA) is 165 nm in the lateral and 405 nm in the axial direction. Switching on the STED pattern shown in **b** leads to a largely spherical main focal fluorescence spot. (**Right**) Gaussian fits through the lateral and axial profiles of the focal spot yield the indicated FWHM values, corresponding to an isotropic far-field optical 3D resolution of λ/16. Baselines are marked with colored circles. Scale bars, 250 nm. From [55], reproduced with permission

**Fig. 1.18** Lightsheet (LS)-RESOLFT concept. A living specimen expressing RSFPs is grown on a coverslip mounted on a movable platform. The specimen is illuminated (here in y direction) perpendicular to the detection axis (z). Only in a thin diffraction-limited section, RSFPs are switched from their initial off state (unfilled dots) to the on state (white dots) by an activating LS. None of the fluorophores outside the illuminated volume is affected by the laser light. An LS featuring a central zero-intensity plane switches off the activated RSFPs above and below the detection focal plane (x–y). For negative-switching RSFPs, this is a competing process to fluorescence (green dots and arrows). For off-switching light intensities above the threshold of the RSFPs, only fluorophores within a slice of subdiffraction thickness remain activated. These can be read out by a third LS and contribute to the LS-RESOLFT image. The platform is displaced to the next position in the scanning sequence for another illumination cycle. The lower row shows measured y–z cross-sections of the applied LSs visualized in fluorescent medium. The sheets impinge on the coverslip at an angle of 30°. (Scale bar, 100 μm.) From [57], reproduced with permission

**Fig. 1.19** Super-resolution microscopy *in vivo*: mouse and fruit fly nanoscopy. (**a**) STED nanoscopy of a mouse with enhanced yellow fluorescent protein-labelled neurons. Shown are dendritic and axonal details in the molecular layer of the somatosensory cortex of a living, anesthetized mouse. Optical access to the brain cortex was enabled by a cover glass-sealed cranial window. Top panel: image of a neuron. Bottom panel: STED time-lapse recording of spine morphology dynamics. Scale bars: 1 μm. (**b**) STED imaging of synaptic protein distribution. Example: PSD95, the abundant scaffold protein at the postsynaptic membrane, which organizes numerous other synaptic proteins. (Left) The cartoons show the *in-vivo* labeling of endogenous PSD95-HaloTag, a self-labeling enzymatic protein tag, with organic fluorophores. (Right) Depending on the orientation of the individual spine head imaged with respect to the focal plane, the intricate spatial organization of PSD95 at the synapse is revealed in the STED mode. Scale bars: 500 nm. (**c**) RESOLFT imaging of the microtubule cytoskeleton of intact, living *Drosophila melanogaster* larvae. A second instar larva ubiquitously expressing a fusion protein composed of the reversibly switchable fluorescent protein (RSFP) rsEGFP2 fused to α-tubulin was placed under a coverslip and imaged through the intact cuticle. Left: confocal overview. Middle and right: magnifications of the area indicated by the corresponding square. Shown are comparisons of confocal and RESOLFT recordings (separated by a dashed line), exemplifying the difference in resolution. Scale bars: 10 μm, 1 μm and 500 nm (from left to right). Part (**a**) is adapted from [21]. Reprinted with permission from AAAS. Part (**c**) is adapted with permission from [63], CC-BY 3.0. Parts (**a**) and (**c**) reproduced with permission from [64], part (**b**) from [65]

#### **1.2 Recent Developments: Nanoscopy at the MINimum**

Improvements to STED microscopy have substantially expanded its capabilities for a growing diversity of cell-biological applications [64]. Concretely, recent adaptive scanning strategies [66–68] have proven key to reducing the overall light dose applied to the sample. These conceptual additions to STED/RESOLFT imaging reduce photobleaching [69] and are advantageous for live-cell imaging. Thus, they have allowed the resolution of STED microscopy to be pushed even closer to the <20 nm regime for organic fluorophores and for routine users, under realistic cell-imaging conditions. The first example of these approaches is MINFIELD [67], which provides major signal increases and prolonged acquisitions by restricting imaging to regions below the diffraction limit (Fig. 1.20a). MINFIELD STED microscopy avoids the exposure of the fluorophore to excess intensities of the 'doughnut' and, more generally, to the maxima of the light intensity distribution used for on/off-switching. Rapid and repetitive MINFIELD recording is likely to be the approach of choice for investigating small spatial domains, such as the synapse. Moreover, MINFIELD STED microscopy will allow fast dynamical nanoscale processes to be captured on millisecond timescales and beyond [56, 67].

DyMIN is a related recording strategy [68] which minimizes exposure to unduly high intensities except at those scanning steps where these intensities are strictly required for resolving features (Fig. 1.20b–d). Like MINFIELD, the DyMIN approach achieves dose reductions of up to several orders of magnitude, particularly for relatively sparse fluorophore distributions. Initially demonstrated for STED immunofluorescence imaging, both MINFIELD and DyMIN will be explored for other classes of fluorophores, including the inherently lower-light-level RESOLFT nanoscopy variants with genetically encoded fluorescent proteins. The recently described switchable organic photochromic compounds [70] will also be further developed as attractive alternatives in this regard. The synergistic combination of two separate fluorophore state transitions in a recent concept termed multiple off-state transitions (MOST) nanoscopy [66] has also enabled many more image frames to be captured, at much improved contrast and with a lower STED light dose at a given resolution than for standard STED. Approaches to directly count molecules with STED have also been developed [71], and can be used to quantify the composition of suitably labeled molecular clusters.
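The decision logic behind such adaptive illumination can be illustrated with a toy model of DyMIN-like stepped probing at a single scan position. This is a schematic sketch only, not the published implementation of [68]; `signal_at`, the power steps and the threshold are hypothetical names and values:

```python
def dymin_probe(signal_at, powers=(0.0, 0.25, 1.0), threshold=5):
    """Sketch of DyMIN-style stepped probing at one scan position.

    signal_at(p) returns the photon count measured with relative STED
    power p (0.0 = diffraction-limited probe, 1.0 = full resolution).
    The STED power is only raised to the next step if the previous,
    gentler probe detected signal above threshold; otherwise the scan
    moves on and the fluorophores are spared the high intensities.
    """
    applied = []
    for p in powers:
        counts = signal_at(p)
        applied.append(p)
        if counts < threshold:
            break  # no fluorophore here: skip the stronger STED steps
    return applied  # powers actually applied at this position

# At an empty position only the gentle probe is applied ...
print(dymin_probe(lambda p: 0))    # [0.0]
# ... while full power is reached only where signal was found:
print(dymin_probe(lambda p: 100))  # [0.0, 0.25, 1.0]
```

For sparse fluorophore distributions most positions terminate after the first, diffraction-limited step, which is the origin of the large dose reductions described above.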

Of all the nanoscopy or super-resolution advances of the last decade, the recently described MINFLUX concept [72] stands out, because it contains a radically new idea. Whereas in PALM/STORM the localization of a molecule is based on maximizing the number of detected fluorescence photons on a camera, which is inevitably limited by bleaching, in MINFLUX (Figs. 1.21 and 1.22) the molecule is localized by making it coincide with the intensity zero of a doughnut-shaped excitation beam. The excitation beam is scanned across the molecule and the fluorescence is typically recorded as in a confocal microscope. The position of the molecule is ultimately identical with the position of the doughnut at which fluorescence emission is minimal (see Fig. 1.21). By fundamentally reducing the number of detected photons required for nanometer-precise localization, MINFLUX has opened the door to low-light-level optical analysis of tiny objects at true molecular-scale resolution (1–5 nm). With MINFLUX, lens-based fluorescence microscopy has thus reached the ultimate resolution limit: the size of the fluorescent molecule itself. Moreover, the resolution is attained at relatively high speed, at least 10 times faster than in PALM/STORM.

**Fig. 1.20** Concepts with improved sample-responsive implementation of the on-off switching. (**a**) The MINFIELD concept: lower local de-excitation intensities in STED nanoscopy for image sizes below the diffraction limit. (Left) In STED imaging with pulsed lasers, the ability of a fluorophore to emit fluorescence decreases nearly exponentially with the intensity of the beam de-exciting the fluorophore by stimulated emission. *I*s can be defined as the intensity at which the fluorescence signal is reduced by 50%. Fluorophores delivering higher signal are defined as on, whereas those with smaller signal are defined as off. (Middle) The STED beam is shaped to exhibit a central intensity zero in the focal region (i.e., a doughnut), so that (Right) molecules can show fluorescence only if they are located in a small area in the doughnut center. This area decreases with increasing total doughnut intensity. Due to its diffraction-limited nature, the intensity distribution of the STED focal beam extends over more than half of the STED-beam wavelength and exhibits strong intensity maxima, significantly contributing to photobleaching. By reducing the size of the image field to an area below the diffraction limit, where the STED beam intensity is more moderate (i.e., around the doughnut minimum; compare image area indicated in the Middle), one can reduce the irradiation intensities in the area of interest, inducing lower photobleaching and allowing the acquisition of more fluorescence signal at higher resolution. Scale bar: 200 nm. (**b**) DyMIN (Dynamic Intensity MINimum) STED imaging. (Left) Concept illustrated for two fluorophores spaced less than the diffraction limit. Signal is probed at each position, starting with a diffraction-limited probing step (PSTED = 0, Top), followed by probing at higher resolution (PSTED > 0). At any step, if no signal indicates the presence of a fluorophore, the scan advances to the next position without applying more STED light to probe at higher resolution. For signal above a threshold (e.g., T1, Upper Middle), the resolution is increased in steps (Lower Middle), with decisions taken based on the presence of signal. This is continued up to a final step of Pmax (full resolution where required). For the highest-resolution steps, directly at the fluorophore(s), the probed region itself is located at the minimum of the STED intensity profile (Bottom). (**c**) Dual-color isotropic nanoscopy of nuclear pore components and lamina with DyMIN STED: confocal and 3D DyMIN STED recordings of nuclear pore complexes (shown in green) and lamina (red). Scale bars: 500 nm. (**d**) DyMIN STED imaging of DNA origami structures with fluorophore assemblies. The DNA origami-based nanorulers with nominally 30-nm separation (10-nm gap) consisted of two groups of ~15 ATTO 647N fluorophores, on average, each. Accounting for the known ~20-nm extent of the fluorophore groups (compare schematic), the widths of the Gaussians imply an effective PSF of ~17 nm (FWHM). Scale bars: 200 nm. Figures reproduced with permission from [67] (**a**) and [68] (**b**–**d**)

**Fig. 1.21** Principles of MINFLUX, a concept for localizing photon emitters in space, illustrated in a single dimension (*x*) by using a standing light wave of wavelength *λ*. (**a**) The unknown position *x*m of a fluorescent molecule is determined by translating the standing wave, such that one of its intensity zeros travels from *x* = −*L*/2 to *L*/2, with *x*m somewhere in between. (**b**) Because the molecular fluorescence f(*x*) becomes zero at *x*m, solving f(*x*m) = 0 yields the molecular position *x*m. Equivalently, the emitter can also be located by exposing the molecules to only two intensity values belonging to functions *I*0(*x*) and *I*1(*x*) that are fixed in space, having zeros at *x* = −*L*/2 and *L*/2, respectively. Establishing the emitter position can be performed in parallel with another zero, by targeting molecules further away than *λ*/2 from the first one. (**c**) Localization considering the statistics of fluorescence photon detection: success probabilities *p*0(*x*) are shown for the various beam separations *L* listed in the legend, for *λ* = 640 nm. The fluorescence photon detection distribution *P*(*n*0|*N* = *n*0 + *n*1 = 100), conditioned to a total of 100 photons, is plotted along the right vertical axis of normalized detections *n*0/*N* for each *L*. The distribution of detections is mapped into the position axis *x* through the corresponding *p*0(*x*,*L*) function (gray arrows), delivering the localization distribution *P*(*x̂*m|*N* = 100). The position estimator distribution contracts as the distance *L* is reduced. (**d**) Cramér-Rao bound (CRB) for each *L*. Precision is maximal halfway between the two points where the zeros are placed. For *L* = 50 nm, detecting just 100 photons yields a precision of 1.3 nm. Figure reproduced from [72]. Reprinted with permission from AAAS

**Fig. 1.22** The MINFLUX concept: molecular resolution in fluorescence nanoscopy. (**a**) Implementation of MINFLUX in 2D fluorescence imaging and tracking. (*Top*) Diagrams of the positions of the doughnut in the focal plane and resulting fluorescence photon counts. (*Bottom*) Basic application modalities of MINFLUX. (*Left*) Nanoscopy: a nanoscale object features molecules whose fluorescence can be switched on and off, such that only one of the molecules is on within the detection range. They are distinguished by abrupt changes in the ratios between the different *n*0,1,2,3 or by intermissions in emission. (*Middle*) Nanometer-scale (short-range) tracking: the same procedure can be applied to a single emitter that moves within the localization region of size *L*. As the emitter moves, different fluorescence ratios are observed that allow the localization. (*Right*) Micron-scale (long-range) tracking: if the emitter leaves the initial *L*-sized field of view, the triangular set of positions of the doughnut zeros is (iteratively) displaced to the last estimated position of the molecule. By keeping it around *r*0 by means of a feedback loop, photon emission is expected to be minimal for *n*0 and balanced between *n*1, *n*2, and *n*3, as shown. (**b**) With MINFLUX nanoscopy one can, for the first time, optically separate molecules which are only a few nanometers apart from each other. On the left, a schematic of the molecules is presented. Whereas ultra-high-resolution PALM/STORM microscopy at the same molecular brightness (*Right*) delivers a diffuse image of the molecules (here in a simulation under ideal technical conditions), the positions of the individual molecules can be easily discerned with the practically realized MINFLUX (*Middle*). (**c**) Much faster movements can be followed than is possible with STED or PALM/STORM microscopy. Left: movement pattern of 30S ribosomes (colored) in an *E. coli* bacterium (gray scale). Right: movement pattern of a single 30S ribosome (green) shown enlarged. (**d**) MINFLUX tracking of rapid movements of a custom-designed DNA origami. (*Top left*) Diagram of the DNA origami construct with a single ATTO 647N fluorophore attached at the center of the bridge (10 nm from the origami base). By design, the emitter can move on a half-circle above the origami and is thus ideally restricted to a 1D movement. (*Bottom left*) Histogram of 6118 localizations of the sample with δt = 400 μs time resolution and a 1.5 × 1.5-nm binning. The predominant motion is along a single direction. (*Right, Upper*) A 300-ms excerpt of the photon count trace (time resolution δt = 400 μs per localization). The color coding corresponds to the zero positions shown to the left. (*Right, Lower*) Mean-subtracted trajectory. Figure reproduced from [72] (**a**) (reprinted with permission from AAAS) and [73] (**d**)

While the experimental developments of the MINFLUX concept are still in their early stages, it is worth commenting on the fundamental advantage over localization based on the emitted fluorescence alone. As discussed in [72, 73], in PALM/STORM, as in camera-based tracking applications, a molecule's position is inferred from the maximum of its fluorescence diffraction pattern (back-projected into sample space). The precision of such camera-based localization ideally reaches σcam ≥ σPSF/√*N*, with σPSF being the standard deviation of the pattern and *N* the number of fluorescence photons making up the pattern [74]. Note that σcam is thus clearly bounded by the finite fluorescence emission rate, which for currently used fluorophores rarely allows more than a few hundred photon detections per millisecond (<1 MHz). Moreover, emission is frequently interrupted and eventually ceases due to blinking and bleaching. This also keeps the photon emission rate as the limiting factor for the obtainable spatio-temporal resolution. As a result, state-of-the-art single-molecule tracking performance long remained in the range of tens of nanometers per several tens of milliseconds. Drawing on the basic ideas of the coordinate determination employed in STED/RESOLFT microscopy, the MINFLUX concept addresses these fundamental limitations [72]. By localizing individual emitters with an excitation beam featuring an intensity minimum that is spatially precisely controlled, MINFLUX takes advantage of coordinate targeting for single-molecule localization. The basic steps are illustrated for one spatial dimension in Fig. 1.21. In a typical two-dimensional MINFLUX implementation, the position of a molecule is obtained by placing the minimum of a doughnut-shaped excitation beam at a known set of spatial coordinates in the molecule's proximity. These coordinates lie within a range *L* in which the molecule is anticipated (Fig. 1.22a). Probing the number of detected photons for each doughnut-minimum coordinate yields the molecular position: it is the position at which the doughnut would produce minimal emission if the excitation intensity minimum were targeted to it directly. As the intensity minimum is ideally a zero, it is the point at which emission is ideally absent. The precision of the position estimate increases with the square root of the total number of detected photons and, more importantly, by decreasing the range *L*, the spatial scale inserted from the outside into the experiment.
For small ranges *L* for which the intensity minimum is approximated by a quadratic function, the localization precision does not depend on any wavelength and, for the case of no background and perfect doughnut control, the precision σMINFLUX simply scales with *L*/√*N* at the center of the investigated range. In other words, the better the coordinates of the excitation minimum match the position of the molecule, the fewer fluorescence detections are needed to reach a given precision. In the conceptual limit where the excitation minimum coincides with the position of the emitter, i.e. *L* = 0, the emitter position is rendered by vanishing fluorescence detection. This is contrary to conventional centroid-based localization where precision improvements are tightly bound to having increasingly larger numbers of detected photons.
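As a rough numerical illustration of the *L*/√*N* behavior at the center of the range, one can simulate the idealized one-dimensional scheme of Fig. 1.21: two intensity profiles, quadratic near their zeros at ±*L*/2, and a position estimate derived from the photon count ratio. This is a toy model assuming a perfect zero, no background and a linearized estimator, not the estimator of [72]:

```python
import random
import statistics

def minflux_1d(x_m, L, N, rng):
    """One localization: expose a molecule at x_m to two intensity
    profiles with zeros at -L/2 and +L/2 (quadratic near the zero),
    collect N photons in total, and estimate the position from the
    fraction of photons attributed to the first exposure."""
    p0 = (x_m + L/2)**2 / ((x_m + L/2)**2 + (x_m - L/2)**2)
    n0 = sum(rng.random() < p0 for _ in range(N))
    # linearized estimator, valid for x_m near the centre of the range
    return (n0 / N - 0.5) * L / 2

rng = random.Random(7)
est = [minflux_1d(x_m=0.0, L=50.0, N=100, rng=rng) for _ in range(3000)]
print(round(statistics.stdev(est), 2))  # ≈1.25 nm, consistent with Fig. 1.21d
```

With only *N* = 100 photons and *L* = 50 nm, the scatter of the estimates is close to the 1.3 nm Cramér-Rao bound quoted in Fig. 1.21d; shrinking *L* further contracts the estimate without requiring more photons.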

The already demonstrated tracking of fluorophores with substantially sub-millisecond position sampling (Fig. 1.22c) is only the beginning of a quest for the highest spatiotemporal capabilities (compare the data in Fig. 1.22d) [73]. The inherent confocality should also provide a critical advantage for imaging in denser, three-dimensional specimens, such as brain slices and *in-vivo* imaging scenarios. With further development of other aspects, such as field-of-view enlargement, MINFLUX is bound to transform the limits of what can be observed in cells and molecular assemblies with light. This should impact cell biology and neurobiology, and possibly also structural biology. Moreover, it should be a great tool for studying molecular interactions and intra-macromolecular dynamics in a range that has not been accessible so far.

**Acknowledgements** Substantial portions of the discussion in this chapter have been only slightly modified from the published text of the Nobel Lecture, as delivered by Stefan W. Hell in Stockholm on December 8, 2014 (Copyright The Nobel Foundation, which has granted permission for reuse of the materials.)

#### **References**


using reversibly photoswitchable proteins. Proc Natl Acad Sci U S A. 2005;102(49):17565–9.


**Steffen J. Sahl** read Natural Sciences at Christ's College Cambridge, earning Honours BA and MSci degrees in experimental and theoretical physics from Cambridge University in 2007 (MA Cantab. 2010). He received his PhD in physics from the University of Heidelberg in 2010 for work performed in the laboratory of Stefan W. Hell at the Max Planck Institute for Biophysical Chemistry, Göttingen. Following postdoctoral work with W. E. Moerner at Stanford University (2011–2014), Sahl returned to the MPI in Göttingen to expand his research on the biophysical analysis of protein aggregation and cellular protein quality control using fluorescence single-molecule methods and nanoscopy. The further development of fluorescence nanoscopy methods remains one of his central research interests, with more than 30 peer-reviewed journal articles authored and co-authored to date.

**Stefan W. Hell** studied physics in Heidelberg, obtaining his diploma in 1987 and his PhD in 1990. From 1997, Hell was a group leader at the Max Planck Institute for Biophysical Chemistry, Göttingen, where he was named a director in 2002. From 2016, Hell has also served as a director at the MPI for Medical Research in Heidelberg. Among numerous awards for his pioneering work on breaking the diffraction resolution barrier of optical microscopy, he received the Gottfried Wilhelm Leibniz Prize in 2008, the Otto Hahn Prize in Physics in 2009, and shared the 2014 Kavli Prize in Nanoscience and the 2014 Nobel Prize in Chemistry.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

**Part II**

**Retinal Imaging and Image Guided Retina Treatment**

# **Scanning Laser Ophthalmoscopy (SLO)**

Jörg Fischer, Tilman Otto, François Delori, Lucia Pace, and Giovanni Staurenghi

#### **2.1 Introduction and Technology**

#### **2.1.1 History**

Twenty years after Theodore Maiman announced the invention of the laser at a press conference in New York [1], the first laser scanning ophthalmoscope was demonstrated in a paper by Webb, Hughes and Pomerantzeff [2]. They named their instrument "flying spot TV ophthalmoscope" since, firstly, they used a scanned beam of an ArKr laser (or a HeNe laser for the red wavelength range) to generate a fast "flying" laser spot on the subject's retina and, secondly, the amplified output voltage of the photomultiplier, which was used to detect the back-scattered light from the retina, was directly connected to the input channel of a television tube monitor. The optical setup of this device is reprinted in Fig. 2.1. The expanded beam of a gas laser is scanned by horizontal and vertical scanning mirrors and deflected

J. Fischer (\*) · T. Otto Heidelberg Engineering GmbH, Heidelberg, Germany

F. Delori Schepens Eye Research Institute, Harvard Medical School, Boston, MA, USA

L. Pace · G. Staurenghi

Department of Biomedical and Clinical Sciences "Luigi Sacco", Sacco Hospital, University of Milan, Milano, Italy

by a half-silvered turning mirror (TM); all components are positioned in planes conjugate to the entrance pupil of the eye. The backscattered light is coupled by the partially transmitting turning mirror into the detection arm, which contains several spatial filters to avoid artifacts from reflections at the final lens and at the cornea.

In the following years, Robert Webb and his team, but also the Heidelberg group of Josef Bille, continuously improved and refined the performance of the then re-baptized Scanning Laser Ophthalmoscope (SLO) [3, 4], including the use of frame-grabber boards allowing for conversion to digital images. Image contrast was drastically improved by confocal detection, which eliminates scattered light originating from outside the focal volume [5, 6]. Scanning Laser Tomography (SLT) was introduced, where a stack of 2D SLO images was acquired with each frame positioned at a different, equally spaced focal plane, and from the 3D data stack a tomography image was calculated (see [7, 8] and Sect. 2.2.2 for details).
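The tomographic reconstruction step can be sketched in a few lines: given a registered stack of confocal section images, a height (topography) map follows from the focal plane of maximal signal at each pixel. This is a minimal illustration of the principle only, with hypothetical data; real instruments fit the full axial intensity profile rather than taking a simple argmax:

```python
def topography_from_stack(stack, dz):
    """Sketch of the scanning-laser-tomography idea: from a stack of
    confocal section images (stack[k][y][x], focal planes spaced dz
    apart), recover a height map by taking, for every pixel, the focal
    plane with the strongest confocal signal."""
    depth = len(stack)
    return [
        [max(range(depth), key=lambda k: stack[k][y][x]) * dz
         for x in range(len(stack[0][0]))]
        for y in range(len(stack[0]))
    ]

# toy 3-plane stack, 2x2 pixels: the confocal signal of each pixel
# peaks at a different focal plane
stack = [
    [[9, 1], [1, 1]],
    [[1, 9], [1, 9]],
    [[1, 1], [9, 1]],
]
print(topography_from_stack(stack, dz=0.5))  # [[0.0, 0.5], [1.0, 0.5]]
```

The confocal aperture is what makes this work: only light from the current focal plane reaches the detector, so the axial signal maximum marks the surface height.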

Together with ophthalmologists, the technical groups started to evaluate the clinical value of the new imaging technique in first clinical studies [9]. A perimeter was integrated into the SLO in order to determine scotoma maps of patients [10, 11], and a study on macular holes and other macular diseases was accomplished at the department of ophthalmology in San Diego in cooperation with the Heidelberg team [6].


**Fig. 2.1** Adapted from reference [2]: Optical set-up of the Boston "Flying Spot TV Ophthalmoscope" (upper part) and block diagram of electronics (lower part)

As early as 1989, Josef Bille et al. proposed and demonstrated the use of adaptive optics to improve the axial resolution of the confocal SLO [12].

#### **2.1.2 Modern Confocal SLO**

Figure 2.2 shows the set-up of an SLO, reduced for simplicity to the key components that are integral parts of every SLO system. The emission of a laser diode is collimated by a lens (CL), and the beam passes through an out-coupling beam splitter (BSP) before it enters the XY-scanning unit (XY). The scanning unit deflects the laser beam in two dimensions and thus generates the raster pattern on the retina. The scan pupil in the plane of XY is relayed by imaging optics (L1, L2) onto the entrance pupil of the examined eye. The anterior segment of the eye (cornea and lens) focuses the laser beam onto the retina. The optics must be adjustable in order to compensate for myopic or hyperopic eyes. In contrast to Fig. 2.1, the signal light (backscattered light or emitted fluorescence) travels back along the same optical path and is "descanned" to a stationary beam. It is separated from the incident laser beam by means of the beam splitter (BSP) and deflected into the detection arm of the system. The signal light is then focused by a lens (FL) onto a pinhole (PH), which serves as confocal aperture and is located in a plane conjugate to the retina and to the emitting laser diode. The light is finally detected (APD) and converted to an electrical signal, which is amplified, digitized and transferred to the computer, where the digital image is reconstructed, preprocessed and displayed on the monitor. Usually the synchronization signals frame, line, and pixel clock are derived from the scanning system: on one hand, the line clock of the fast scanning direction (here the horizontal X-axis) is divided into an equally spaced pixel clock; on the other hand, the frame clock

**Fig. 2.2** Basic set-up of an SLO. The core components are: a laser (here laser diode LD), an XY scanning unit (defining the scan pupil), optics to relay the scan pupil onto the entrance pupil of the eye, a beam splitter (BSP) to separate the backscattered signal from the incident laser path, the pinhole (PH), and a high sensitivity avalanche photodiode detector (APD). The green dotted arrow indicates the scan sweep. An out-of-focus beam is also shown (yellow interrupted line) to illustrate the strong attenuation of such a beam at the pinhole

is generated to trigger the scan ramp of the slower scan axis (here the vertical Y-axis). Due to the confocal set-up, back-scattered light originating from structures anterior and posterior to the focal plane is blocked efficiently by the pinhole aperture (PH). For fluorescence imaging, a barrier filter needs to be inserted in the detection arm (typically between BSP and FL, not shown in Fig. 2.2) in order to block the reflected excitation light.

#### **2.1.3 SLO Core Components**

In the following, the technology of the core components is discussed:

#### **2.1.3.1 Laser Source**

The early SLOs incorporated gas lasers with their superior beam profile and stable, continuous wave (cw) laser emission. For reflectance imaging, usually the red laser line of the HeNe (helium-neon) laser was used, whereas for fluorescence applications the turquoise 488 nm line of the argon-ion laser (Ar+ laser) became very popular. This wavelength matches almost perfectly the absorption maximum of sodium fluorescein, a standard dye that has been used in clinics since the 1960s for fluorescein angiography (FA) with fundus cameras [13].

The Heidelberg Retina Tomograph (HRT) was the first SLO to replace the bulky and rather inefficient gas laser with a compact and less expensive red laser diode at 670 nm. Whereas laser diodes in the red and near-infrared wavelength range were already available in the early 1990s, for fluorescein angiography the replacement of the gas lasers took more time. Figure 2.3 displays three generations of blue lasers emitting at 488 nm that were used in Heidelberg Retina Angiography (HRA) systems.

#### **2.1.3.2 Scan Unit**

The biggest challenge for a fast 2D scanning system is the high line rate of 8 kHz or more that is required for the fast scanning direction. In the following it is assumed that the fast scan axis of the 2D scan pattern is the horizontal X-direction, and the slow axis refers to the vertical Y-scan. Such a fast line rate is required in order to achieve frame rates of >15 Hz while still maintaining a sufficiently high density of square pixels, i.e. pixels with the same separation in X- and Y-direction. In addition to the high speed, the optical scan angle and the size of the scan pupil, i.e. the mirror size, are crucial parameters that need to be carefully balanced. A lower primary scan angle can be compensated by appropriate magnification optics, which will, however, at the same time reduce the diameter of the pupil area that defines the solid angle under which back-scattering or fluorescence emission can be detected.

**Fig. 2.3** Laser devices emitting at 488 nm for FA and AF imaging in Heidelberg Engineering systems: (**a**) Ar+ laser from Uniphase Inc. used in the HRA "classic", (**b**) "Sapphire" laser, an optically pumped solid state laser (OPSL) from Coherent Inc. (HRA2 and early Spectralis) and (**c**) a laser diode in a TO56 (Ø5.6mm) package (Spectralis)

These demanding specifications can be met by resonant scanners, where the mirror is mounted on a beryllium torsion bar with a counterweight at the other end. The total system of mirror, torsion bar and counterweight has an intrinsic, high-Q resonance frequency of half the line rate (both half-periods of the sinusoidal oscillation are used as scan lines) and is excited to the fast sinusoidal oscillation by a phase-locked loop (PLL) circuit.

A common alternative to resonant scanners are polygon scanners, where a polygon-shaped mirror is rotated by means of a fast DC motor at up to 60,000 rpm. For a polygon mirror with eight mirror facets, this would correspond to a line frequency of 8 kHz. Again, the size of the scan pupil and the required scan angle limit the number of facets and therefore also the scan rate.

For the slow axis (vertical direction) in general galvanometric scanners are used, which are usually controlled in a closed loop with a sawtooth ramp.
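The scanner arithmetic above can be cross-checked with a short sketch (illustrative numbers, not the specification of any particular product):

```python
# Illustrative scanner arithmetic (hypothetical numbers, not a product spec).

def polygon_line_rate(rpm, facets):
    """Line rate in Hz of a polygon scanner: each facet sweeps one line."""
    return rpm / 60.0 * facets

def resonant_line_rate(resonance_hz):
    """Bidirectional resonant scanner: both half-periods of the sinusoidal
    oscillation are used as lines, so the line rate is twice the resonance
    frequency (the text states the resonance is half the line rate)."""
    return 2.0 * resonance_hz

def frame_rate(line_rate_hz, lines_per_frame):
    """Frame rate resulting from a given line rate and number of scan lines."""
    return line_rate_hz / lines_per_frame

print(polygon_line_rate(60_000, 8))   # 8000.0 Hz, as in the text
print(resonant_line_rate(4_000))      # 8000.0 Hz
print(frame_rate(8_000, 512))         # 15.625 frames/s, i.e. >15 Hz
```

For a hypothetical 512-line frame, an 8 kHz line rate thus yields just the >15 Hz frame rate demanded above.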

#### **2.1.3.3 Beam Splitter**

Different beam splitter optics (plates or cubes) have been used to separate the back-scattered signal light from the incident laser beam path. For fluorescence systems (angiography and autofluorescence), dichroic mirrors are the optimum choice. The dielectric coating of these mirrors is usually designed such that the beam splitter plate has a high transmission for the excitation laser wavelength but a strong reflectivity for the "red-shifted" fluorescence emission.

For reflectance imaging, a splitting ratio of 20:80 is usually a good choice: a transmission of 20% is in many cases sufficient, since the maximum laser power that can be applied to the eye is limited anyway by laser safety requirements. On the other hand, it is of course desirable to couple as much as possible of the back-scattered signal light into the detection branch.

Another option is the use of a polarizing beam splitter: the laser beam is p-polarized and therefore passes through the polarizing beam splitter almost completely. After the beam splitter, a quarter-wave plate is inserted, which, due to the double pass (outgoing and back-scattered beams), rotates the polarization by 90°, so that the signal light is s-polarized and thus reflected by the polarizing beam splitter into the detection unit. It is important to note that birefringent structures within the eye (e.g. cornea, retinal nerve fiber layer) can cause an inhomogeneous illumination of the image, since the combined action of the quarter-wave plate and the birefringent tissue leads to an imperfect 90° rotation of the polarization.
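The 90° rotation of the double pass can be verified with a short Jones-calculus sketch (our own illustration: ideal components and polarization-preserving back-scatter are assumed; the function names are ours):

```python
import numpy as np

def rot(theta):
    """Rotation matrix for Jones calculus."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def waveplate(retardance, theta):
    """Jones matrix of a linear retarder with fast axis at angle theta."""
    return rot(-theta) @ np.diag([1.0, np.exp(1j * retardance)]) @ rot(theta)

qwp45 = waveplate(np.pi / 2, np.pi / 4)   # quarter-wave plate at 45 deg

p_in = np.array([1.0, 0.0])   # p-polarized laser transmitted by the PBS
# Out-going pass, (assumed polarization-preserving) back-scatter, return pass:
back = qwp45 @ (qwp45 @ p_in)

print(np.round(np.abs(back) ** 2, 6))   # -> [0. 1.]: all power is s-polarized
```

A double pass through a quarter-wave plate at 45° acts as a half-wave plate at 45°, which is exactly the 90° polarization rotation exploited by the beam splitter.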

#### **2.1.3.4 Imaging Optics**

Optics are required to relay the scan pupil to the entrance pupil of the eye. In many cases a telecentric 4f design is chosen, as shown schematically in Fig. 2.2. The advantage of such a design is that, by adjusting the distance between the two lens groups (L1 and L2), the convergence or divergence of the laser beam at the scan pupil can be adjusted to correct for the refractive error of the eye without changing the magnification of the scan angle at the entrance pupil.

By using lens groups with different effective focal lengths, the primary scan angle can be magnified or reduced, and the beam diameter at the eye pupil is correspondingly decreased or increased. Especially for the design of wide-field imaging optics, it is important to integrate the lens data of a wide-field model eye into the optics design file in order to achieve a sharp fundus image over the complete scan field.
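The trade-off between scan angle and beam diameter can be sketched as follows (hypothetical focal lengths; the pupil magnification m = f2/f1 follows from the 4f geometry, and the product of angle and beam diameter is conserved):

```python
def relay(scan_angle_deg, beam_diam_mm, f1_mm, f2_mm):
    """Telecentric 4f pupil relay: the pupil magnification m = f2/f1 scales
    the beam diameter by m and the scan angle by 1/m (Lagrange invariant)."""
    m = f2_mm / f1_mm
    return scan_angle_deg / m, beam_diam_mm * m

# Hypothetical example: halving the pupil doubles the scan angle at the eye.
angle, diam = relay(scan_angle_deg=10.0, beam_diam_mm=6.0,
                    f1_mm=100.0, f2_mm=50.0)
print(angle, diam)   # 20.0 deg at the eye, 3.0 mm beam diameter
```

This is the quantitative form of the statement above: magnifying the scan angle necessarily shrinks the pupil diameter that collects the back-scattered light.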

#### **2.1.3.5 Detectors**

Whereas in early SLO systems photomultiplier tubes (PMTs) were used because of their then-unrivaled sensitivity, nowadays semiconductor-based detectors such as avalanche photodiodes (APDs) and silicon photomultipliers (SiPMs) are the first choice.

APDs consist of a semiconductor p-n junction, where the incident photons create free charge carriers in the absorption zone. These carriers drift to the multiplication zone, where a high reverse voltage leads to their avalanche-like multiplication due to impact ionization. Typically a gain of more than 100 can be achieved, and this gain increases with the applied reverse voltage.

APDs can also be operated in Geiger mode for single photon counting. In these single photon avalanche photodiodes (SAPD) the applied reverse voltage lies above the breakdown voltage, and an initial charge carrier created by a single photon is multiplied into a current in the milliampere range. The leading edge of this current pulse is used to trigger a photon counter. During the high-current phase, the bias voltage drops below the breakdown voltage and the device is blind for the detection of new photons. This dead time limits the maximum count rate of the SAPDs.

Silicon photomultipliers (SiPM) consist of an array of APDs operated in Geiger mode. The dimension of such an array is typically on the order of a few millimeters. The dimension of each single SAPD cell is in the range of 10–100 μm, i.e. the array typically consists of several thousand elements. The output channels of the individual cells are connected in parallel to one common read-out element. Although each element operates in the digital Geiger mode, the combined output of the complete device yields an analogue signal, which is proportional to the incident photon flux until the saturation level is reached.
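The transition from the linear regime to saturation can be illustrated with the standard idealized SiPM response model (not taken from this chapter; cross-talk, after-pulsing and cell recovery time are neglected):

```python
import numpy as np

def sipm_fired_cells(n_photons, n_cells, pde=1.0):
    """Expected number of fired cells of an idealized SiPM: for few photons
    the response is nearly linear, for many photons it saturates toward
    n_cells because several photons hit the same (binary) cell."""
    n_photons = np.asarray(n_photons, dtype=float)
    return n_cells * (1.0 - np.exp(-pde * n_photons / n_cells))

print(sipm_fired_cells(10, 4000))      # ~10 fired cells: near-linear regime
print(sipm_fired_cells(40_000, 4000))  # ~4000 fired cells: saturated
```

The model makes explicit why the combined output is "proportional to the incident photon flux until the saturation level is reached".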

#### **2.1.4 Resolution of the SLO**

#### **2.1.4.1 Limitations and Numerical Aperture (NA) of the Eye**

The optical resolution of the scanning laser ophthalmoscope is limited by the anatomy of the human eye itself. As in any conventional light microscope the minimum spot size of the focused laser in the object plane is limited by diffraction.

The numerical aperture NA of an optical system characterizes the range of angles (half cone angle θ, measured against the optical axis) over which the system can accept or emit light. It is also a measure of the optical resolution in a diffraction-limited system. For the human eye it is defined by:

$$\mathrm{NA}_{\mathrm{eye}} = n_{\mathrm{vit}} \cdot \sin\left(\Theta\right) = n_{\mathrm{vit}} \cdot \sin\!\left(\frac{D}{2 f_p}\right) \approx n_{\mathrm{vit}} \cdot \frac{D}{2 f_p} = \frac{D}{2 f_{\mathrm{eye}}}\tag{2.1}$$

where *n*vit = 1.336 is the refractive index of the vitreous, D the pupil diameter, *fp* the posterior focal length of the emmetropic eye (*fp* = 22.3 mm), and *feye* = *fp*/*n*vit the anterior focal length, which determines the lateral scaling parameters of the retina. The error due to the small-angle approximation in Eq. (2.1) is less than 1.25%.

For undilated pupils the maximum NAeye is about 0.09 (D = 3 mm); it can be increased by a factor of 2–3 by dilating the pupil pharmacologically to D = 6–8 mm. However, due to the limited optical quality of the eye and the strong increase of the optical aberrations in the periphery, the distortions of the wave front then actually result in a larger focal volume on the retina and thus decrease the optical resolution compared to undilated pupils [14, 15]. In order to exploit the full diffraction-limited resolution for dilated pupils, an adaptive optics (AO) element must be used to compensate the wave front distortions of the individual eye. With this concept, the lateral resolution can be increased by a factor of 2–3 and the axial resolution even by a factor of 4–9 compared to undilated pupils (see also Chaps. 16–18). The following considerations are only valid for eyes without optical aberrations, i.e. either for undilated eyes with sufficient optical quality, or for dilated pupils with AO compensation.
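Eq. (2.1) and the size of the small-angle error can be checked numerically with a minimal sketch using the values given above:

```python
import math

N_VIT = 1.336   # refractive index of the vitreous
F_P = 22.3      # posterior focal length of the emmetropic eye in mm

def na_eye(pupil_diam_mm):
    """NA of the eye in the small-angle approximation of Eq. (2.1)."""
    return N_VIT * pupil_diam_mm / (2.0 * F_P)

for d in (3.0, 6.0, 8.0):
    exact = N_VIT * math.sin(d / (2.0 * F_P))   # without the approximation
    approx = na_eye(d)
    err_pct = 100.0 * (approx / exact - 1.0)
    print(f"D = {d} mm: NA = {approx:.3f}  (small-angle error {err_pct:.2f} %)")
```

For D = 3 mm this reproduces the NA of about 0.09 quoted above, and even for D = 8 mm the small-angle error stays well below the 1.25% bound.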

In order to calculate the lateral and axial extent of the focal spot, the light propagation integral over the pupil function needs to be solved. In the following, the results for two different approaches are summarized: Fraunhofer diffraction at a circular aperture and propagation of a Gaussian beam.

#### **2.1.4.2 Fraunhofer Diffraction at a Circular Aperture**

Fraunhofer diffraction assumes an incident plane wave (e.g. a collimated laser beam) with constant amplitude over the complete, circular aperture with diameter D. The latter assumption is usually only a rough approximation, since the laser profile typically is not flat. The radially symmetric intensity distribution I(r,z) in the focal volume can be separated into one expression I(r, z = 0) describing the lateral intensity in the focal plane and a second term I(r = 0, z) describing the axial distribution for r = 0 as a function of the axial distance z [16, 17]:

$$I\left(r, z=0\right) = 4\,\frac{J_1^2\!\left(\frac{\pi \cdot D}{\lambda \cdot f_{\mathrm{eye}}} \cdot r\right)}{\left(\frac{\pi \cdot D}{\lambda \cdot f_{\mathrm{eye}}} \cdot r\right)^2}\tag{2.2a}$$

and

$$I\left(r=0, z\right) = \frac{\left[\sin\!\left(\frac{\pi \cdot \mathrm{NA}_{\mathrm{eye}}^2}{2\lambda n} \cdot z\right)\right]^2}{\left[\frac{\pi \cdot \mathrm{NA}_{\mathrm{eye}}^2}{2\lambda n} \cdot z\right]^2}\tag{2.2b}$$

with *J*1 in the upper equation referring to the first-order Bessel function. The first minimum of this radially symmetric distribution is at:

$$r_{\min} = 0.61 \cdot \frac{\lambda}{\mathrm{NA}_{\mathrm{eye}}} = 1.22 \cdot \frac{\lambda \cdot f_{\mathrm{eye}}}{D}\tag{2.3}$$

*rmin* represents the radius of the so-called Airy disk. Under the assumption that two structures can be separated when the maximum of the first structure coincides with the minimum of the second (Rayleigh criterion), one obtains for the lateral resolution δx*RC*:

$$\delta x_{RC} = 0.61 \cdot \frac{\lambda}{\mathrm{NA}_{\mathrm{eye}}}\tag{2.4}$$

In a similar way, from the distribution I(r = 0, z), a value δz*FWHM* for the full width at half maximum can be derived [17]:

$$\delta z_{FWHM} = 0.88 \cdot \frac{\lambda}{n - \sqrt{n^2 - \mathrm{NA}_{\mathrm{eye}}^2}}\tag{2.5}$$

For small numerical apertures with *NAeye* < 0.5 this yields:

$$\delta z_{FWHM} \cong 1.77 \cdot \frac{n\lambda}{\mathrm{NA}_{\mathrm{eye}}^2}\tag{2.6}$$
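Eqs. (2.4) and (2.6) translate into the following quick calculation of focus spot sizes (illustrative NA values roughly corresponding to undilated and dilated pupils; the exact tabulated values are given in Table 2.1):

```python
def lateral_resolution(wavelength_um, na):
    """Rayleigh-criterion lateral resolution, Eq. (2.4)."""
    return 0.61 * wavelength_um / na

def axial_fwhm(wavelength_um, na, n=1.336):
    """Axial FWHM in the small-NA approximation, Eq. (2.6)."""
    return 1.77 * n * wavelength_um / na ** 2

# 488 nm excitation; NA = 0.09 (undilated) vs. NA = 0.24 (dilated, D = 8 mm)
for na in (0.09, 0.24):
    dx = lateral_resolution(0.488, na)
    dz = axial_fwhm(0.488, na)
    print(f"NA = {na}: lateral ~ {dx:.1f} um, axial ~ {dz:.0f} um")
```

The strong NA² dependence of the axial extent explains why dilating the pupil (with aberration compensation) improves the axial resolution much more than the lateral one.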

#### **2.1.4.3 Beam Waist for Propagating Gaussian Beam**

The propagation theory of Gaussian beams is described in the literature and in optics textbooks [18, 19]. It assumes a Gaussian intensity profile, which is the case for many lasers (TEM00 mode) and also for the output of single-mode fibers. However, it normally does not consider the truncation of the Gaussian beam at the finite aperture, which results in a broadening of the focus distribution. The propagation of the non-truncated beam is described by the beam waist *ω*(*z*), which gives the beam radius (intensity drop to 1/e²) as a function of the axial position z. The beam waist in the focal plane is denoted *ω*0; the Rayleigh length zr refers to the z-distance at which the cross-section radius has increased by a factor of √2, i.e. the beam area has doubled.

$$\omega(z) = \omega_0 \cdot \sqrt{1 + \left(\frac{z}{z_r}\right)^2}\tag{2.7}$$

with

$$\omega_0 = \frac{2\lambda \cdot f_{\mathrm{eye}}}{\pi \cdot D} = \frac{\lambda}{\pi \cdot \mathrm{NA}_{\mathrm{eye}}}\tag{2.8a}$$

and

$$z_r = \frac{n \cdot \pi \cdot \omega_0^2}{\lambda}\tag{2.8b}$$
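Eqs. (2.8a) and (2.8b) can likewise be evaluated directly (illustrative values: 815 nm wavelength, undilated pupil with NA ≈ 0.09):

```python
import math

def gaussian_waist(wavelength_um, na):
    """Focal waist w0 of a non-truncated Gaussian beam, Eq. (2.8a)."""
    return wavelength_um / (math.pi * na)

def rayleigh_length(wavelength_um, w0_um, n=1.336):
    """Rayleigh length, Eq. (2.8b): distance over which the beam area doubles."""
    return n * math.pi * w0_um ** 2 / wavelength_um

w0 = gaussian_waist(0.815, 0.09)           # ~2.9 um waist radius
zr = rayleigh_length(0.815, w0)            # ~43 um Rayleigh length
print(round(w0, 2), round(zr, 1))
```

Note that these numbers describe the idealized, non-truncated beam; the truncation discussed next broadens the focus.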

Dickson [19] investigated the influence of truncation of the Gaussian beam by a circular aperture, which depends on the ratio of the truncating circular aperture with radius *r*A and the Gaussian beam waist in the pupil plane *ωp*:


– for *rA* ≫ *ωp*, the beam is essentially untruncated and the Gaussian beam results of Eqs. (2.7), (2.8a) and (2.8b) apply;

– for *rA* ≪ *ωp*, the aperture is approximately uniformly illuminated and the focus waist will approximate the result of the Fraunhofer diffraction pattern.

For an SLO examination on an undilated eye, the limiting aperture is usually the iris (2–3 mm diameter), which is often of the order of the laser beam diameter (*rA* ≈ *ωp*); thus a truncation factor of ×1.5 is justified in many applications.

#### **2.1.4.4 Resolution Improvement Due to Confocal Detection**

In order to determine the influence of the confocal aperture on the lateral and axial resolution, the illumination point spread function (PSF, i.e. the 3D intensity distribution of an ideal point source imaged onto the sample) needs to be multiplied by the detection point spread function, defined by the image of a point source emitter within the sample onto the pinhole aperture. In theory, the confocal detection could thus improve the resolution by a factor of 1/√2; however, this enhancement is only achieved when the pinhole size is much smaller than the Airy disk diameter projected into the pinhole plane. Usually, for intensity reasons, this is not the case in commercially available cSLOs.
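The 1/√2 factor can be reproduced with a one-dimensional Gaussian approximation of the illumination and detection PSFs (an idealized sketch assuming a point-like pinhole and identical illumination and detection PSFs):

```python
import numpy as np

# Gaussian approximation of the PSFs; the confocal PSF is their product.
z = np.linspace(-5.0, 5.0, 10001)
sigma = 1.0
psf_ill = np.exp(-z ** 2 / (2 * sigma ** 2))   # illumination PSF
psf_conf = psf_ill * psf_ill                   # x detection PSF (identical)

def fwhm(x, y):
    """Full width at half maximum on a dense grid."""
    above = x[y >= y.max() / 2.0]
    return above[-1] - above[0]

ratio = fwhm(z, psf_conf) / fwhm(z, psf_ill)
print(round(ratio, 3))   # ~0.707 = 1/sqrt(2)
```

The product of two identical Gaussians is a Gaussian with σ/√2, hence the theoretical resolution gain quoted above.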

In Table 2.1, typical focus spot parameters are summarized for different assumptions, wavelengths and pupil diameters.

#### **2.1.5 Example for High Resolution SLO Image**

The following SLO image was acquired with a Spectralis using a high magnification objective (HMO) on a healthy subject. The HMO reduces the optical scan angle by a factor of 2 and, in addition, the digital pixel density is doubled in both scan directions. This results in a 16× higher pixel density compared to the standard 30° HR scan image (see Fig. 2.4). The laser wavelength was 815 nm and, in order to avoid additional aberrations, the pupil was not dilated. The measurement was taken at an eccentricity of about 10°. The diameter of the cone cells (inner segments) varies between ≈2 μm and 7 μm with eccentricity increasing from 0° to 5° [20]. Further in the periphery, a slow further increase to about 8 μm is reported. The images clearly demonstrate that structures as small as 6–8 μm can be imaged in the retina without the use of adaptive optics.

**Table 2.1** Summary of lateral and axial resolution and focal dimensions for non-dilated and dilated pupils, under the assumption that the optical aberration is perfectly compensated, e.g. by adaptive optics

**Fig. 2.4** (**a**) High resolution SLO image of a healthy subject at an eccentricity of 10°. The size of the image is 8° × 8°, corresponding to about 2.3 mm × 2.3 mm. (**b**) Detail of the fundus image clearly displaying the cone pattern, without the use of adaptive optics. (**c**) For orientation: 30° image of the same eye

#### **2.2 Laser Scanning Tomography**

Gerhard Zinser and his team developed in 1991 the Heidelberg Retina Tomograph (HRT), a commercial SLO dedicated from the beginning to the diagnosis of glaucoma by assessing the morphology of the optic nerve head.

Together with his partner, Christoph Schoess, he founded Heidelberg Engineering, and the HRT was the first product of the company. Both had previously been employees of Heidelberg Instruments, where Gerhard Zinser led the R&D department for laser scanning ophthalmology and had already built, in the late 1980s, a series of prototypes of a confocal SLO named the Heidelberg Laser Tomographic Scanner (LTS). In 1998 (HRTII) and in 2005 (HRT3) the next generations of the HRT were released, with improvements in the data acquisition work flow as well as in the analysis software (see Fig. 2.5).

#### **2.2.1 HRTII/HRT3 Acquisition Work Flow**

For laser scanning tomography, a set of 2D SLO frames at equally spaced focal positions is acquired. To this end, between two frames of a series, the focus of the laser is shifted by means of a motorized telescope within the camera head. For the HRTII/HRT3 the data acquisition work flow is as follows:

First the patient is positioned and fixates with the examined eye on a fixation target presented at about 12° nasally, such that the optic nerve head (ONH) appears centered within the 15° × 15° scan field.

**Fig. 2.5** Three generations of confocal SLOs for optic nerve head (ONH) tomography built in Heidelberg. (**a**) Laser Tomographic Scanner (LTS) from Heidelberg Instruments (1989), (**b**) Heidelberg Retina Tomograph (HRT "classic"), the first product of Heidelberg Engineering (1991) and (**c**) compact HRT3 (2005)

The user then aligns the camera head in three dimensions to ensure that the scan pupil, i.e. the pivot point of the scanning laser beam, coincides exactly with the entrance pupil of the eye. The SLO image is displayed on the monitor, and the brightness of the image, i.e. the detector sensitivity, is adjusted automatically (auto-brightness) to make sure that the signal stays within the linear range of the detector over the complete z-scan and no saturation effects corrupt the data. Once the camera is properly aligned, the acquire button is pressed and the acquisition of three consecutive z-scans is started. The first series consists of a stack of 64 images with a focal plane spacing of 62.5 μm, i.e. covering an overall z-range of 4 mm in the eye. From this first series, the required z-scan depth for the two consecutive z-scans is calculated (depending on the ONH geometry, 2–4 mm), in order to avoid unnecessarily long acquisition times. Finally an automatic quality control check is performed and, if it passes, the series are saved on the computer.

#### **2.2.2 HRTII/HRT3 Data Processing**

Each of the three acquired series is corrected for eye movements during the z-scan by laterally matching and shifting each image with respect to the previous image within the scan.

On the basis of the corrected data sets, a topography and a reflectance image are calculated for each of the three series. Figure 2.6 displays the working principle of the computation algorithm. For each lateral pixel (xy-position), the z-profile is analyzed, and the values for the reflectance (mean brightness) and topography (surface height of the retina) images are calculated according to the equations in Fig. 2.6. From the three sets of images, finally a mean topography and a mean reflectance image are calculated (see Fig. 2.7). For better visualization, the topography of the ONH can also be displayed as a 3D surface, as shown in Fig. 2.8.
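The per-pixel z-profile analysis can be sketched as follows. Note that the actual equations are those given in Fig. 2.6; the forms used here (mean brightness for the reflectance, intensity-weighted centroid of the z-profile for the surface height) are plausible assumptions for illustration only:

```python
import numpy as np

def reflectance_and_topography(stack, z_positions):
    """Sketch of the z-profile analysis (assumed forms, see lead-in).
    stack: acquired volume with shape (nz, ny, nx).
    Returns the reflectance image (mean brightness along z) and the
    topography image (intensity-weighted centroid of each z-profile)."""
    stack = np.asarray(stack, dtype=float)
    refl = stack.mean(axis=0)
    weights = stack / stack.sum(axis=0, keepdims=True)
    topo = np.tensordot(z_positions, weights, axes=([0], [0]))
    return refl, topo

# Toy volume: 64 planes spaced 62.5 um, one bright plane at index 20.
z = np.arange(64) * 62.5
stack = np.zeros((64, 2, 2))
stack[20] = 100.0
refl, topo = reflectance_and_topography(stack, z)
print(topo[0, 0])   # 1250.0 um = 20 * 62.5: the surface height is recovered
```

For a real ONH dataset, running this per-pixel analysis over the 64-plane stack yields exactly one reflectance and one topography value per xy-position, as described above.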

#### **2.2.3 Contour Line, Reference Plane and Stereometric Parameters**

After data acquisition and calculation of the reflectance and topography images, a contour line around the optic disk needs to be defined, similar to the cup/disk ratio measurements on fundus images. The contour line is used to calculate stereometric parameters such as the rim area, rim volume, cup shape measure and others, which describe the shape of the ONH. In addition, these parameters can be combined by means of several discriminant functions, which have been shown in clinical studies to have a high sensitivity and specificity for glaucoma detection [21–24].

The ONH contour needs to be defined manually by the physician, and some experience is required to place the line correctly around the optic disk. However, it has been shown in a clinical study [25] that the variability between contour lines drawn by different physicians has only little effect on the parameter data. In addition, the contour line needs to be defined only once for the baseline data and is then automatically transferred to the new image data acquired during follow-up examinations.

The reference plane is calculated on the basis of the defined contour line: the plane is parallel to the peripapillary retinal surface and is set 50 μm below the retinal surface height at the papillomacular bundle, which is located in the 350–356° section (temporal) of the contour line. The reference plane is used to separate rim from cup area and volume: structures inside the contour line are defined as neuroretinal rim if their surface height (maximum of the z-profile) lies above the reference plane, or as cup if it lies below (Fig. 2.9). This definition of the reference plane allows follow-up exams to be objectively compared to the baseline data.
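The rim/cup separation described above can be sketched as a simple per-pixel classification (a hypothetical helper for illustration; the actual HRT software additionally derives areas and volumes from these masks):

```python
import numpy as np

def classify_rim_cup(topography, reference_plane_height, inside_contour):
    """Pixels inside the contour line whose surface height lies above the
    reference plane are rim; those below are cup (hypothetical helper)."""
    rim = inside_contour & (topography >= reference_plane_height)
    cup = inside_contour & (topography < reference_plane_height)
    return rim, cup

# Toy 2x2 topography (surface heights in um) with one pixel outside the contour
topo = np.array([[300.0, 120.0],
                 [80.0, 250.0]])
inside = np.array([[True, True],
                   [True, False]])
rim, cup = classify_rim_cup(topo, reference_plane_height=200.0,
                            inside_contour=inside)
print(int(rim.sum()), int(cup.sum()))   # 1 rim pixel, 2 cup pixels
```

Multiplying these boolean masks by the pixel area (and, for the cup, by the depth below the reference plane) gives the rim/cup areas and volumes used as stereometric parameters.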

#### **2.2.4 Analysis of HRT Optic Nerve Head (ONH) Data**

#### **2.2.4.1 ONH Classification Based on Moorfields Regression Analysis**

In 1998, Wollstein et al. from Moorfields Eye Hospital in London proposed a linear regression analysis based on the rim to disc area ratios of the ONH [26]. In a cross-sectional study, they imaged 80 normal subjects and calculated from the linear regression curves different confidence intervals (CI) for normal subjects. The data was analyzed for the global ONH as well as separately for six different sectors (see Fig. 2.10). This approach was implemented with an extended reference database in the HRT software, in order to classify acquired ONH data as follows: "within normal limits" for subjects with a rim-disc area ratio within the 99% CI (globally and in all six sectors), "borderline" for subjects where at least one of the seven parameters lies outside the 99% CI but still within the 99.9% CI, and "outside normal limits" when the rim-disc area ratio for at least one of the seven parameters lies outside the 99.9% CI of the reference database. Examples of such classifications are shown in Figs. 2.10 and 2.11.

**Fig. 2.6** From the volumetric dataset, the z-profile for each xy pixel is analyzed in order to calculate the corresponding pixels R(x,y) for the reflectance image and T(x,y) for the topography image

**Fig. 2.7** On the basis of the three acquired confocal SLO series, finally a mean topography and a mean reflectance image are calculated

**Fig. 2.8** ONH topography displayed as a 3D surface for better visualization. Left: ONH classified as normal. Right: ONH with large excavation, classified as borderline

**Fig. 2.9** Left: ONH contour line. Middle: Sketch to illustrate the definition of the reference plane and the cup and rim areas and volumes. Right: Representation of the cup (red) and rim (blue and green) area as overlay on the topography image

**Fig. 2.10** Moorfields Regression Analysis. Left: ONH classified "normal". Right: ONH classified "outside normal limits"
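The three-way classification logic can be sketched as follows (hypothetical interface; in the real software the sector-wise confidence bounds come from the reference database, and a smaller rim/disc ratio is the suspicious direction):

```python
def moorfields_classification(ratios, ci99_low, ci999_low):
    """Sketch of the Moorfields-style three-way classification.
    ratios: rim/disc area ratios for the global ONH and the six sectors
    (seven values); ci99_low / ci999_low: lower bounds of the 99% and
    99.9% confidence intervals of the reference database."""
    if all(r >= lo for r, lo in zip(ratios, ci99_low)):
        return "within normal limits"
    if all(r >= lo for r, lo in zip(ratios, ci999_low)):
        return "borderline"
    return "outside normal limits"

# Toy bounds, identical for all seven parameters for simplicity
ci99 = [0.70] * 7
ci999 = [0.60] * 7
print(moorfields_classification([0.80] * 7, ci99, ci999))
print(moorfields_classification([0.80] * 6 + [0.65], ci99, ci999))
print(moorfields_classification([0.80] * 6 + [0.50], ci99, ci999))
```

A single sector falling outside the 99% CI is enough for "borderline", and a single sector outside the 99.9% CI is enough for "outside normal limits", mirroring the rule stated above.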

#### **2.2.4.2 Follow-Up and Progression Analysis**

In order to differentiate between a progressing glaucomatous ONH and a stable one, follow-up examinations acquired at later time points are compared with the original baseline data. To this end, the follow-up images are first matched to the baseline images in order to compensate for head tilt, accommodation differences and other external influences; in a second step, the baseline contour line is transferred to the new images to enable the calculation of all follow-up stereometric parameters.

One possibility to visualize the temporal development is to plot the stereometric parameters versus time, with the date of the baseline examination as the origin of the time axis. Since the absolute changes and also the sign of the changes vary between parameters, the temporal change of each stereometric parameter is normalized to the difference of this parameter averaged over a healthy group and over a group of glaucoma patients.

Another possibility to visualize the changes in a time series of ONH topography images was proposed by Chauhan et al. [27]. They used statistical methods to assess the significance of changes of clustered super-pixels in the topography images and visualized these changes by colored overlays: red indicating a decrease of surface height (i.e. an increase of excavation) and green indicating an increase of surface height.


**Fig. 2.11** Results for Moorfields linear regression analysis displayed for the global ONH and the six segments

#### **2.2.5 Summary SLT for Glaucoma Diagnostics**

Laser scanning tomography was for several years the gold standard for diagnosing glaucoma and monitoring its progression by assessing the morphology of the optic nerve head. Numerous clinical studies published in peer-reviewed journals have shown that the HRT enables reproducible measurements of the morphology of the optic nerve head [7, 28], has a very high sensitivity and specificity for discriminating patients with early glaucoma from normal subjects [26, 29], and that the success of therapeutic interventions can be reliably assessed by documenting the stagnancy or progression of the disease [27, 30, 31].

Only with the availability of spectral domain OCT systems (see Chap. 3), which provided on one hand a much better axial resolution compared to the confocal SLO and on the other hand a much shorter acquisition time and higher spatial accuracy compared to the previously available commercial time domain OCT devices, did the demand for laser scanning tomography technology slowly decrease. However, since glaucoma is a slowly progressing disease, where the progression needs to be monitored carefully over years and newly acquired data needs to be compared with baseline and follow-up data acquired years ago, the HRT is still a valuable instrument used on a daily basis in clinical practice for monitoring and managing glaucoma patients.

#### **2.3 Widefield Indocyanine Green Angiography (ICGA)**

The advent of widefield imaging has for the first time offered the chance to investigate both the central and the peripheral retina in a single examination. A wide visualization of the retinal periphery is necessary for the screening, diagnosis, monitoring, and treatment of many diseases. Early diagnosis of peripheral retinal or choroidal disease could reduce potential vision loss.

Fluorescein angiography (FA) and indocyanine green angiography (ICGA) are two imaging modalities that use a water-soluble dye to visualize the retinal and choroidal vasculature [32].

**Fig. 2.12** Indocyanine green angiography images of a patient with retinal vein occlusion. From left to right: 30° field of view with the standard Spectralis objective, 55° WFO, 60° with the Ocular Lee-Mainster SLO lens, 102° UWF, 120° on the horizontal axis and 80° on the vertical axis with the Optos imaging system, and 150° with the Ocular Staurenghi contact lens in combination with the Spectralis

**Fig. 2.13** Indocyanine green angiography in a healthy eye showing the 7-standard fields, the wide field and the ultra-widefield areas. The red outline highlights the 7-standard fields and the blue circle outline highlights the wide field area

Simultaneous widefield fluorescein and indocyanine green angiography with a confocal scanning laser ophthalmoscope can be performed to explore areas of peripheral chorioretinal nonperfusion and neovascularization beyond the range of conventional fundus cameras. Several add-on lenses for confocal SLOs can be used to obtain different fields of view (see Fig. 2.12).

Using widefield angiography, important clinical observations have been made in several conditions. One important application of widefield angiography is in patients with diabetic retinopathy. Several studies have demonstrated the association between peripheral retinal nonperfusion and the occurrence of neovascularization and diabetic macular edema [33–35]. The Diabetic Retinopathy Study introduced imaging of diabetic retinopathy in the retinal periphery by obtaining the 7-standard fields. By combining these 30-degree images, a montage visualizes about 75° of the retina [36]. Ultra widefield fluorescein angiography images captured 3.9 times more area of retinal non-perfusion, 1.9 times more neovascularization and 3.2 times more retinal surface area than what is seen within the ETDRS standard 7-fields overlay [34] (Fig. 2.13).

Another important use of widefield fluorescein angiography in diabetic retinopathy is the evaluation of peripheral retinal areas of non-perfusion or neovascularization to perform a target laser photocoagulation (Fig. 2.14). This treatment is useful to reduce the vascular endothelial growth factor production and to increase oxygen diffusion from the choroid [37, 38]. Widefield fluorescein angiography allows for treating specific areas of retinal non-perfusion while using less energy and sparing relatively better perfused tissue from laser-induced tissue scarring.

Also in patients with retinal vein occlusion, ultra widefield angiography may be a powerful tool to identify therapeutic target areas for photocoagulation, allowing for efficient treatment of ischemic retina, and for potentially minimizing collateral destruction of adjacent viable perfused retina [39].

**Fig. 2.14** Fluorescein angiography images of a patient with proliferative diabetic retinopathy: (**a**) 30° field of view, (**b**) WFO 55°, (**c**) 60° with Ocular Lee-Mainster SLO lens; (**d**) UWF 102° and (**e**) 150° with the Ocular Staurenghi contact lens

Widefield angiography is also used in patients with uveitis. The diagnosis and management of uveitis are challenging. Accurate diagnosis, definitions of activity and response to treatment are typically based on the clinical and angiographic appearance [40]. Some features of posterior uveitis, such as perivascular sheathing, peripheral capillary non-perfusion, venous staining or leakage, cystoid macular edema, and disc edema, can be detected by widefield angiography [41]. This technique allows for imaging the central and peripheral retina simultaneously, giving more precise detail than a montage reconstruction.

Intraocular choroidal tumors are also better visualized using widefield angiography, and differential diagnosis is supported by a multimodal imaging approach. Abnormal choroidal vessels or intrinsic tumor vessels, as in hemangioma or melanoma, are better seen with widefield angiography, allowing for a more precise diagnosis.

Imaging of the peripheral retina has improved significantly over the past years. Widefield angiographic technology has become an important clinical tool with regard to the early diagnosis, treatment and monitoring of most sight-threatening retinal and choroidal diseases.

#### **2.4 Quantitative Autofluorescence of the Retina**

Confocal scanning laser ophthalmoscopy has been the imaging system of choice for auto-fluorescence (AF) imaging because of its high sensitivity and its image averaging capabilities that are required to record the fundus AF with acceptable signal/noise ratios using safe retinal exposures. The first clinical AF imaging systems were introduced in the mid-1990s [42, 43] using an excitation wavelength of 488 nm. Subsequent developments and the introduction of several commercial imaging platforms have further broadened the field and allowed AF imaging to become an important imaging modality for clinical diagnosis [44].

#### **2.4.1 Origin and Spectral Characteristics of Fundus Auto-Fluorescence (AF)**

The fluorophore responsible for AF of the fundus is principally lipofuscin residing in the retinal pigment epithelium (RPE) [45]. Lipofuscin is a byproduct of the visual cycle. The tips of the outer segments of photoreceptors are damaged by photo-oxidation and are phagocytosed on a daily basis. These materials contain polyunsaturated fatty acids and byproducts of the visual cycle that are partially digested in the RPE. A small fraction is chemically incompatible with degradation and accumulates in lysosomes of the RPE as lipofuscin, a mixture of various fluorophores. Chemically, some of these compounds have been identified and synthesized as bisretinoids [46, 47].

In-vivo spectrophotometric and imaging studies of AF have shown that the fluorescence can be excited from 430 nm to about 600 nm [45, 48, 49]. In healthy subjects, the excitation spectrum peaks around 500 nm with a shift towards longer wavelengths with increasing age (Fig. 2.15). The fluorescence is emitted over a broad spectral band extending from the excitation wavelength to about 800 nm, with a maximum shifting from 600 nm to 660 nm as the excitation wavelength is changed from 430 nm to 600 nm, respectively. This "red-shift" occurs for excitations at the long wavelength end of the absorption spectrum of some fluorophores in viscous or polar environments [50]. The emission spectra in healthy subjects shift slightly towards shorter wavelengths with increasing age, a trend that is accentuated in AMD [51]. This could be the result of oxidation of lipofuscin and/or of a fluorescence contribution from Bruch's membrane deposits.

**Fig. 2.15** Average excitation spectra (left) and emission spectra (right) of fundus auto-fluorescence. Blue spectra: group of young healthy subjects (n = 26; 15–28 years); red spectra: group of old healthy subjects (54–67 years). The excitation spectra represent the emission at 620 nm plotted against the respective excitation wavelength (430, 470, 510, 550 nm). The error bars are the 95% confidence intervals. All spectra were corrected to account for the absorption of the ocular media. Interrupted lines (left panel) are uncorrected (as measured outside of the eye)

#### **2.4.2 Quantitative Auto-Fluorescence (AF) Imaging**

Quantitative measurement of fundus AF from images acquired with an SLO would be possible if one also measured the laser power and recorded the sensitivity and the zero value after acquiring each image. Grey levels in the image and the gain versus sensitivity calibration would then allow quantification of AF levels. However, this was judged not practical because these measurements are not easy to perform routinely. Instead, a fluorescent standard was incorporated as a reference in the Spectralis SLO path (Heidelberg Engineering, Heidelberg, Germany). The reference is located in a plane conjugate to the retina, so that it is always in focus with the fundus image (Fig. 2.16). Average grey levels of the reference allow accounting for the effects of variations in laser power and detector gain [52].

Successful acquisition of images for quantifying fundus AF depends in large part on the skills of the operator, and a dedicated operator is highly recommended. Detailed protocols for acquiring optimal AF images have been published [53], including special protocols used in quantitative work [54]. In essence, the fundus of the dilated test eye (>6 mm pupil diameter) is first illuminated with 488 nm light for a 20–30 s period to reduce AF attenuation by photo-pigment absorption (bleaching). Focus and camera alignment are optimized during that period. Critical uniformity of the AF signal over the whole field is attained by fine axial adjustment of the camera position. The sensitivity is adjusted to avoid non-linear effects for both the fundus and the internal reference (see later). After final alignment, a 'video' of 9 or 12 frames is acquired. After rejection of low-quality frames (eye movement, iris obstruction), the remaining frames are aligned, averaged and saved in the "non-normalized" mode (no histogram stretching) to create the AF image for analysis.

**Fig. 2.16** Reference and zero strip as implemented by Heidelberg Engineering. Image from Janet Sparrow, PhD (Columbia University)

The AF is then quantified as *qAF* by comparing the grey levels (*GL*) of every pixel of the fundus image with the mean *GL* measured at the internal reference, accounting for the zero signal (no light):

$$qAF = RCF \times \frac{GL_{\text{Fundus}} - GL_{\text{Zero}}}{GL_{\text{Reference}} - GL_{\text{Zero}}} \times \left(\frac{M_{\text{em}}}{M}\right)^2 \times 10^{\,5.96 \times 10^{-5}\left(\text{age}^2 - 400\right)}\tag{2.9}$$

**Fig. 2.17** The magnification correction factor (*M*em/*M*)2 is a function of the refractive error and of the radius of curvature of the cornea. The factors were computed on the basis of the Gullstrand eye model (No. 2) and the optical characteristics of the Spectralis. The cloud of grey points represents the distribution of magnification correction factors and refractive errors for the healthy subjects. The average magnification factor was 1.06 and the 95% confidence interval was 0.87–1.25

*RCF* is the 'Reference Calibration Factor' obtained from calibration of the internal reference with a master fluorescent target. The (*M*em/*M*)2 term accounts for differences in magnification between the test eye and an emmetropic eye (Fig. 2.17).

The last term of Eq. (2.9) accounts for the absorption of the excitation light and of the fluorescence emission by the ocular media. We have used the algorithm of van de Kraats and van Norren [55] to estimate the average optical density of the ocular media at a given age. In order to eliminate some unknown terms, this density was expressed relative to the media density at age 20 years [56]. Thus, *qAF* reflects fundus AF relative to that which would be measured through the media of a 20-year-old emmetropic eye with average ocular dimensions.
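As an illustration of how Eq. (2.9) combines these corrections, the following Python sketch applies the reference normalization, the magnification correction and the age-dependent media term. All numerical inputs (grey levels, RCF, magnification factor) are hypothetical; the media term is written so that it equals 1 at age 20, consistent with the normalization described above:

```python
import numpy as np

def qaf(gl_fundus, gl_reference, gl_zero, rcf, mag_corr, age):
    """Quantified autofluorescence per Eq. (2.9) (illustrative sketch).

    gl_fundus             : grey level(s) of the fundus image pixels
    gl_reference, gl_zero : mean grey levels of internal reference / zero strip
    rcf                   : Reference Calibration Factor (master target)
    mag_corr              : magnification correction term (M_em / M)**2
    age                   : subject age in years; media term equals 1 at age 20
    """
    media_term = 10.0 ** (5.96e-5 * (age ** 2 - 400.0))  # relative to age 20
    return rcf * (gl_fundus - gl_zero) / (gl_reference - gl_zero) \
        * mag_corr * media_term

# Hypothetical grey levels for three fundus pixels, for illustration only
pixels = np.array([120.0, 150.0, 180.0])
q = qaf(pixels, gl_reference=200.0, gl_zero=20.0,
        rcf=250.0, mag_corr=1.06, age=20)
```

Because the zero-corrected fundus grey levels are divided by the zero-corrected reference grey level, fluctuations in laser power and detector gain cancel out, which is the central design idea of the internal-reference approach.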

After acquiring the mean *GL*'s from the internal reference and zero-strip (or the info panel), all pixel *GL*s of the image are converted into *qAF* by applying Eq. (2.9). The resulting *qAF* image is analyzed with dedicated software; the same algorithms were implemented in the Heidelberg Engineering version of the qAF software. Mean *qAF*'s are then computed at eight standard locations in the fundus (Fig. 2.18). To obtain a single qAF metric for an eye, we generally average the *qAF* of the segments of the middle ring. This average, *qAF8*, minimizes the variation caused by the natural distribution of AF at the posterior pole, which is highest in the superotemporal quadrant and lowest in the inferonasal quadrant [57, 58].

**Fig. 2.18** The standard sampling areas are eight segments organized in an annulus around the fovea. The inner and outer radii of this ring are 0.58 × FD and 0.78 × FD, respectively, where FD is the horizontal distance between the edge of the disc and the foveal center. The histogram for each segment is fitted with two Gaussians allowing for the separation of fundus pixels (red) and in some cases pixels from vessels (blue). The green bracket shows the range over which GLs are integrated to yield a mean value for that segment. The GL for the optic disc is also calculated and serves as a criterion to define 'atrophy' when analyzing retinal degenerations

**Fig. 2.19** Variation of the fundus grey levels (GL, corrected for zero level) as a function of the fluorescence intensity measured outside the eye (the last two terms of Eq. 2.9 account for intraocular parameters). The blue points represent the actual operating points for GLs in the fundus image at about 8° from the fovea for a population of healthy eyes [58] and those with retinal dystrophies [59, 60]. Similarly, grey points correspond to the levels at the fovea. Curves of constant sensitivity indicate non-linearity at high GL

A critical condition for the validity of Eq. (2.9) is that the exposure of the detector is in the linear range of the *GL* versus light exposure characteristic (Fig. 2.19). Non-linear behavior occurs at high *GL*'s because noise may cause some of the exposures to correspond to *GL* > 255. This 'saturation' can be avoided by limiting the *GL*'s to those indicated by curve A (the software produces pixel-size colored flashes when this occurs). Similarly, at low *GL*'s many of the exposures may in fact correspond to *GL* < 0 (non-linearity not shown). Keeping the sensitivities higher than that shown by curve B (71) will prevent this error and ensure that *GL*(Reference) − *GL*(Zero) remains above about 20. Finally, at sensitivities higher than about 91 (line C) the detector's gain becomes extremely variable and unstable.

#### **2.4.3 Research Studies**

The qAF methodology has been used for normative studies [58, 61] and in investigations of patients affected by Best vitelliform macular dystrophy [60], recessive Stargardt disease [59], bull's eye maculopathy [62], retinitis pigmentosa [63], and age-related macular degeneration [64]. Additionally, studies of subjects with a monoallelic ABCA4 mutation have also been reported [61, 65]. The method appears robust when the image quality is good, particularly in terms of uniformity. In different studies, repeatability for two sessions on the same day varied from ±6% to ±10% (95% CI) and concordance between eyes was ±13–20%. These studies have demonstrated that quantification of AF with this standardized approach can aid in assessing whether specific fundus areas in pathological conditions have normal or abnormal AF levels, in providing valuable genotype-phenotype correlations, and in studying the natural history of disease progression in contrast to normal aging. However, long-term repeatability will have to be systematically investigated before longitudinal studies are undertaken.

#### **2.5 Summary and Conclusion**

The scanning laser ophthalmoscope has, since its invention in 1980, undergone numerous improvements and has evolved into a sophisticated and versatile imaging modality. Today, SLO imaging is a well-established tool in clinical routine. Two examples, scanning laser tomography for glaucoma diagnostics (Sect. 2.2) and widefield SLO angiography with its importance for the assessment of various retinal and choroidal diseases (Sect. 2.3), have been discussed in detail. Other SLO applications, such as multicolor imaging [66] and autofluorescence imaging [67], are also successfully used in clinical routine for the diagnosis and progression monitoring of various diseases, e.g. age-related macular degeneration (AMD) and hereditary macular dystrophies.

Furthermore, in many devices SLO technology is used to provide a reference image in order to actively track eye movements and thus stabilize the area of interest for other examination technologies such as optical coherence tomography (OCT) (Chap. 3), OCT angiography (Chap. 6), fluorescence lifetime imaging ophthalmoscopy (FLIO, Chap. 10) and microperimetry [68].

Finally, SLO autofluorescence imaging is a very active field of research, since it visualizes the distribution of intrinsic fluorophores (mainly lipofuscin components), which play an important role in the visual cycle (renewal) of the photoreceptors. Therefore, the quantification of the intrinsic fluorescence as described in Sect. 2.4 can provide a better understanding of the natural history of diseases and their pathological mechanisms.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Optical Coherence Tomography (OCT): Principle and Technical Realization**

Silke Aumann, Sabine Donner, Jörg Fischer, and Frank Müller

#### **3.1 Introduction**

Optical coherence tomography (OCT) is a noncontact imaging technique which generates cross-sectional images of tissue with high resolution. It is therefore especially valuable in organs where traditional microscopic tissue diagnosis by means of biopsy is not available, such as the human eye.

Since OCT is completely noninvasive, it provides in vivo images without impacting the tissue that is imaged. Fast scanning rates and quick signal processing allow for image visualization in real time and at video rate. As shown in Fig. 3.1, the resolution of OCT is much higher than that of other medical imaging methods like ultrasound or magnetic resonance imaging (MRI). It combines an axial resolution that can reach that of confocal microscopy with a lateral resolution comparable to confocal scanning laser ophthalmoscopy. Typically, OCT systems have a resolution of 5–20 μm. Due to the interferometric measurement method, the axial resolution is defined by the light source, not the focusing optics. It therefore overcomes the limitations on optical focusing imposed by the limited pupil size of the eye. The extended focus and the operation with near-infrared light maintain a penetration depth of a few hundred microns, covering the whole retina.

S. Aumann (\*) · S. Donner · J. Fischer · F. Müller Heidelberg Engineering GmbH, Heidelberg, Germany

Given the lack of alternative diagnostic tools for depth-resolved assessment of the retina, and the distinct characteristics of OCT, it is no surprise that the first commercially available OCT was an ophthalmic imaging device. It entered the market in 1996, only 5 years after the invention of OCT. Despite the technological promise that OCT offered, in the first years only a total of ~180 units were sold until 1999 [1]. This can be understood by examining the technology that was initially introduced to the market. Time-domain OCT technology (TD-OCT, see subsequent section for the working principles) requires acquisition of a depth scan for every location and consequently offers very slow imaging speed and poor image quality. Usability and the impact of the noisy images on clinical diagnosis limited adoption of this new technology.

The introduction of spectral domain OCT (SD-OCT) was able to overcome the limitations of TD-OCT. Image quality and imaging speed were significantly improved by SD-OCT, which is able to capture the whole depth information simultaneously.

In 2006 Heidelberg Engineering introduced SPECTRALIS—the first imaging platform that combined SD-OCT technology with a scanning laser ophthalmoscope (SLO). Use of the SLO facilitates co-localization of the fundus scan with the cross-sectional OCT images and opened up previously unknown diagnostic possibilities. Through this technological combination the instrument is capable of precise motion tracking, allowing for rescanning the same location at a later point in time for follow-up assessment and therapy control.

**Fig. 3.1** Comparison of resolution in axial and lateral direction between some medical imaging techniques for different body parts. Skin/cornea: reflectance confocal microscopy (RCM). Retina: confocal scanning laser ophthalmoscopy (cSLO), adaptive optics scanning laser ophthalmoscopy (AOSLO), optical coherence tomography (OCT), adaptive optics optical coherence tomography (AO-OCT). General: magnetic resonance imaging (MRI), computed tomography (CT), medical ultrasound

Incorporating functions that build upon this OCT technology created a clinical need for OCT and it has become the standard tool for imaging in macula diseases, diabetic retinopathy and glaucoma, to name a few examples from a wide range of retinal applications. The ability to segment retinal layers allows for thickness measurement, which improves glaucoma diagnosis, because thinning of the nerve fiber layer marks the onset and progression of the disease. The anterior segment of the eye also benefits from OCT imaging. Biometric measurements of the eye's anatomy including the axial eye length allow for precise choice of intraocular lenses.

This chapter of the book will concentrate on the technical implementation of general OCT technology and on the SPECTRALIS instrument. This information will support the clinical chapters within this book and offers context to how technology impacts the various applications of OCT in the eye. First, the working principle will be explained, followed by some technical parameters like resolution, sensitivity and roll-off which are important measures to rate and select OCT systems. The chapter will continue with the implementation of OCT together with confocal scanning laser ophthalmoscopy for motion tracking during OCT acquisition and follow-up functionality in SPECTRALIS. Further analysis of the OCT signal allows for functional extensions of OCT-imaging to detect blood flow and tissue properties like birefringence and elasticity. The last section of this chapter gives a summary of functional OCT methods.

#### **3.2 Technique and Theory of OCT**

#### **3.2.1 Basic Principle of OCT**

OCT is often compared to medical ultrasound because of the similar working principles. Both medical imaging techniques direct waves to the tissue under examination, where the waves echo off the tissue structure. The back reflected waves are analyzed and their delay is measured to reveal the depth in which the reflection occurred. OCT uses light in the near-infrared, which travels much faster than ultrasound. The delays of the backreflected waves cannot be measured directly, so a reference measurement is used. Through the use of an interferometer, part of the light is directed to the sample and another portion is sent to a reference arm with a well-known length.

The idea of low-coherence interferometry is the underlying principle of all OCT implementations. Temporal coherence is a property of a light source and characterizes the temporal continuity of a wave train sent out by the source and measured at a given point in space. Wave trains emerging from a light source of low temporal coherence maintain a fixed phase relation only over a very limited time interval corresponding to a confined travel range, the coherence length or coherence gate. A light source with a broad spectral bandwidth is composed of a range of wavelengths. Such a broadband source has low coherence, while monochromatic laser light has a narrow spectral line and features a coherence length that can reach many meters. An interferometer splits light coming from a source into two separate paths and combines the light coming back from the two paths at the interferometer output. There, under certain conditions, interference can be observed: coherent waves superimpose and their electromagnetic field amplitudes add constructively (i.e. they reinforce each other) or destructively (i.e. they cancel each other out) or meet any condition in between. The associated light intensity can be measured as an electrical signal using a photodetector. This signal is a function of the difference in optical path length between the two arms. For a low-coherence light source (like an SLD or a pulsed laser source), interference is only possible if the optical paths are matched to be equal in length within the short coherence length of the source, which usually is on the order of micrometers.
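The coherence gate can be illustrated numerically. The sketch below is illustrative only: an 840 nm source with 50 nm bandwidth is an assumed value typical for retinal OCT, not a figure from this chapter. It integrates the interference term S(k)·cos(2kΔz) over a Gaussian spectrum and shows that fringe contrast survives only within a few micrometers of zero path-length difference:

```python
import numpy as np

# Gaussian broadband source: center 840 nm, ~50 nm bandwidth (assumed values)
lam0, dlam = 840e-9, 50e-9
k0 = 2 * np.pi / lam0
dk = np.pi * dlam / lam0**2          # approximate spectral width in wavenumber

k = np.linspace(k0 - 3 * dk, k0 + 3 * dk, 4000)
S = np.exp(-((k - k0) ** 2) / (2 * (dk / 2.355) ** 2))  # power spectrum S(k)

# Integrate the interference term over the spectrum for each path difference dz
dz = np.linspace(-30e-6, 30e-6, 1201)
fringe = np.array([np.trapz(S * np.cos(2 * k * z), k) for z in dz])
contrast = np.abs(fringe) / np.abs(fringe).max()

# Fringe contrast peaks at matched path lengths and vanishes outside
# the coherence gate (a few micrometers for this bandwidth).
```

The width of the resulting contrast envelope scales inversely with the spectral bandwidth, which is exactly why broadband sources give OCT its micrometer-scale depth gating.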

#### **3.2.2 Technical Realizations of OCT**

In the first implementation of OCT [2], the reference arm length was modulated for each depth scan, and the recorded intensity of the combined light at the sensor gave the reflectance profile of the sample. This variant is called time-domain OCT (TD-OCT) and the basic setup is shown in Fig. 3.2.

As depicted, the light of a low-coherence source is guided to the interferometer, which in this example is a fiber-based implementation. In a system using bulk optics, the fiber coupler is replaced by a beam splitter. The input beam is split into the sample beam and the reference beam travelling to a mirror on a translational stage. The back-reflected light from each arm is combined and only interferes if the optical path lengths match and therefore the time travelled by the light is nearly equal in both arms. Modulations in intensity, also called interference fringe bursts, are detected by the photodiode. The amount of back-reflection or back-scattering from the sample is derived directly from the envelope of this signal (see Fig. 3.2, lower row).

For each sample point, the reference mirror is scanned in depth (*z*) direction and the light intensity is recorded on the photo detector. Thereby a complete depth profile of the sample reflectivity at the beam position is generated, which—in analogy to ultrasound imaging—is called A-scan (amplitude scan).
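This depth scan can be sketched numerically. The example below is an illustrative simulation, not the chapter's implementation: a single reflector produces a fringe burst as the reference mirror is scanned, and the envelope (extracted here with a Hilbert transform, one common demodulation choice) forms the A-scan. All numerical values are assumptions:

```python
import numpy as np
from scipy.signal import hilbert

# Single reflector at 100 um depth; 840 nm source, ~6 um coherence length
# (all values assumed for illustration).
lam0, lc = 840e-9, 6e-6
k0 = 2 * np.pi / lam0
z_r = np.linspace(0, 200e-6, 20001)    # scanned reference mirror positions
z_s = 100e-6                           # reflector depth in the sample
delta = z_r - z_s

# Interference fringe burst: carrier at 2*k0 under the coherence envelope
signal = np.exp(-(delta / lc) ** 2) * np.cos(2 * k0 * delta)

# The A-scan is the envelope of the burst; its peak recovers the depth
envelope = np.abs(hilbert(signal))
z_peak = z_r[np.argmax(envelope)]      # close to 100 um
```

Note that the reference mirror must be moved mechanically through the full depth range for every lateral position, which is the root cause of the slow acquisition speed of TD-OCT mentioned earlier.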

To create a cross-sectional image (or B-Scan), the sample beam is scanned laterally across the sample. This abbreviation originated in ultrasound imaging, where B-Scan means brightness scan.

**Fig. 3.2** Working principle of TD-OCT: light from the light source is split into the reference beam and the sample beam. Back-reflected light from both arms is combined again and recorded by the detector. To record one depth profile of the sample (A-scan) the reference arm needs to be scanned. This has to be repeated for each lateral scan position. Figure reprinted from [3]

Fourier domain OCT (FD-OCT, also frequency domain OCT) is the second generation of OCT technology and provides a more efficient implementation of the principle of low-coherence interferometry. In contrast to TD-OCT, FD-OCT uses spectral information to generate A-scans without the need for mechanical scanning of the optical path length.

Two methods were established to acquire the spectral information of the interferometric signal. Both record an interference spectrum, also called spectral interferogram, from which the A-scan is computed via Fourier transformation. The unique properties of this interferogram are given in more detail later together with a simplified mathematical description.

Spectrometer-based FD-OCT, commonly referred to as spectral domain OCT (SD-OCT), was first proposed by Fercher et al. in 1995 [4]. The basic optical setup is depicted in Fig. 3.3 (top left): it is similar to TD-OCT, but the point detector is replaced by a spectrometer. The spectrometer uses a diffractive element to spatially separate the different wavelength contributions into a line image which is recorded by a high-speed line scan camera. Each read-out of the camera constitutes a spectral interferogram with a superposition of fringe patterns, as will be explained below. A superluminescent diode (SLD) is commonly chosen as a broadband light source, because it features a large bandwidth and a relatively high power output.

**Fig. 3.3** Optical setup of spectrometer based OCT (SD-OCT) in the upper left inset and swept source OCT (SS-OCT) in the upper right inset. While SD-OCT uses a spectrometer for wavelength separation, SS-OCT features a light source which sweeps the wavelength in time. Both implementations record an interference spectrum which carries the depth information of the sample. FFT is used to transform the interference signal into the A-scan. (Figure taken from Drexler et al. [5])

The principle of swept-source OCT (SS-OCT) was first demonstrated 2 years after SD-OCT, in 1997 [6], and was immediately applied in ophthalmology for the measurement of intraocular distances [7]. The optical setup is similar to TD-OCT, but the broadband light source is replaced by an optical source which rapidly sweeps a narrow line-width over a broad range of wavelengths, see the top right inset of Fig. 3.3. During one sweep, each wavelength component of the interferometric signal is detected sequentially by a high-speed photodetector. Commercially available sources can realize high sweep rates (>100 kHz), which require ultrafast detection and analog-digital (AD) conversion in the GHz range. One wavelength sweep constitutes a spectral interferogram with fringe patterns, as in SD-OCT.

For each sample point, this spectral interferogram is recorded, as shown exemplarily in the lower left inset of Fig. 3.3. The original source spectrum (black solid line) is modulated with numerous rapid oscillations. In contrast to TD-OCT, the interferogram contains information for all depth layers of the sample simultaneously. To extract their individual contributions as a function of their depth position, Fourier transformation is required. The amplitude of the complex-valued Fourier transform is squared to yield power values. The resulting A-scan (see bottom right inset of Fig. 3.3) includes a mirror term, which is attributed to inherent properties of the Fourier transform and is rejected in the final image.

Comparing the two implementations of FD-OCT, equivalent parameters are used to describe and quantify the system's performance. For example, in SD-OCT, the acquisition speed is limited by the linescan rate of the camera, whereas in SS-OCT it is given by the sweep rate of the swept-source and subsequent AD conversion. Additional measures of performance are described in more detail in the section on the image properties of OCT.

Compared to TD-OCT, the spectral OCT techniques have allowed for a dramatic increase in signal-to-noise ratio (SNR) and imaging speed [8–10]. They have paved the way for volumetric and real-time imaging in ophthalmology, a field that is highly impacted by sample motion.

#### **3.2.3 Signal Formation in OCT**

To fully appreciate the working principle of FD-OCT and to understand the formation of the spectral interferogram, a closer look at the signal formation is given in the following section.

First, a sample consisting of one discrete layer at depth position *z* is considered. *z* is defined as half of the optical path length difference between the reference mirror and the sample layer. For a given *z*, conditions of constructive and destructive interference alternate as a function of the wavelengths of the broadband source, resulting in a periodic modulation of the source spectrum. An example of such a fringe pattern is shown for two reflective layers at different depths in Fig. 3.4a, b. The fringe spacing in wavenumber is uniquely linked to the depth position *z* by Δ*k* = *π*/*z*. The larger *z*, the narrower are the fringes and the higher is the corresponding modulation frequency.

The associated modulation amplitude is proportional to √*R*, where *R* denotes the power reflectivity of the sample layer. The total interferogram consists of a superposition of both single interferograms (Fig. 3.4c). Fourier transformation and conversion to power values generate the reflectivity profile of the sample.

In the more general case, a sample of extended depth and multiple reflective layers gives rise to a superposition of many different modulation patterns, each with a specific frequency and amplitude. In the following, a simplified mathematical description is given for this case.

Each of the *N* layers is characterized by its depth position *zn* and its ability to reflect or backscatter light, given by *Rn*. *zn* is defined as half of the optical path length difference between the reference mirror and the *n*th layer of the sample.

The optical power density *S*(*k*) of the light source is described as a function of wavenumber *k* = 2*π*/*λ* as is standard practice in OCT literature. The spectral interferogram *ID*(*k*) is then given by

$$I\_D\left(k\right) \propto S\left(k\right) \sum\_{n=1}^{N} \sqrt{R\_n R\_R}\,\cos\left(2 k z\_n\right) \tag{3.1}$$

where *R*R denotes the reflectivity of the reference mirror. For the sake of simplicity, only the term which encodes the sample properties is shown, which represents the cross-correlation of the electric field amplitudes of the sample and the reference arm. Also, the refractive index of the sample is omitted and absorption is neglected. Generally, a constant (DC) term and an auto-correlation term, which accounts for self-interference within the sample, contribute to the final spectral interferogram as well. The interested reader may refer to [11] for a comprehensive derivation.

**Fig. 3.4** Spectral OCT interferograms: (**a**) Interference fringes caused by a single reflector at 50 μm with a reflectivity of 10%. (**b**) Same as (**a**) but reflector at 300 μm with 5% reflectivity. The comparison of (**a**) and (**b**) reflects the fact that the z-depth of the signal is encoded in the frequency k of the interference modulation, whereas the reflectivity of the reference arm (*R*r) and of the backscattering sample surface (*R*s) determines the amplitude of the modulation signal. (**c**) Interferogram with both reflectors and (**d**) the OCT A-scan, which is calculated from (**c**) by Fourier Transformation. The smaller autocorrelation signal is caused by interference of light reflected at R1 and R2 (red arrow)

Eq. (3.1) is now considered for TD-OCT, where two details are essential. Firstly, the photodetector used in TD-OCT cannot resolve the individual spectral contributions *k* of the source to the measured signal *ID*. Mathematically, the detection corresponds to an integration of *ID*(*k*) over the bandwidth of the source. Secondly, the reference arm is scanned, so the detected signal becomes a function of the reference arm position *zR*. The photodetector signal is then given by:

$$I\_D\left(z\_R\right) \propto S\_0 \sum\_{n=1}^{N} \gamma\left(z\_n\right) \sqrt{R\_n R\_R}\,\cos\left(2 k\_0 z\_n\right) \tag{3.2}$$

*S*0 is the spectrally integrated power of the source and the coherence envelope *γ*(*zn*) is the inverse Fourier transform of the normalized power spectrum *S*(*k*). For a Gaussian shaped spectrum, the coherence function, also sometimes referred to as fringe-visibility, is given by:

$$\gamma\left(z\_n\right) = \exp\left(-\ln 2 \left(\frac{2 z\_n}{l\_c}\right)^{2}\right) \tag{3.3}$$

The coherence envelope quickly drops to zero if 2*zn* > *lc*, i.e. if the optical path length difference exceeds the coherence length *lc* of the light source; it thereby acts as a depth selector. While scanning the reference mirror, this coherence gate is shifted through the sample. The resulting sample reflectivity profile is convolved with the coherence function and modulated by a cosinusoidal carrier. Eq. (3.2) is a mathematical description of the interference fringe bursts described in Fig. 3.2.
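A minimal numeric check of the coherence envelope, assuming the Gaussian (squared-exponent) form of Eq. (3.3), confirms that it acts as a depth gate whose full width at half maximum equals *lc*:

```python
import numpy as np

l_c = 8.5   # assumed roundtrip coherence length in µm, cf. Eq. (3.5)

def gamma(z):
    """Coherence envelope for a Gaussian-shaped spectrum, Eq. (3.3)."""
    return np.exp(-np.log(2) * (2 * z / l_c) ** 2)

# The fringe visibility halves when the path difference 2z equals l_c ...
print(gamma(l_c / 2))     # ≈ 0.5
# ... and is negligible a few coherence lengths away from the gate
print(gamma(3 * l_c))
```

Only reflectors within roughly one coherence length of the reference position contribute appreciable fringe contrast, which is precisely the depth selection exploited in TD-OCT.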

In contrast to TD-OCT, FD-OCT acquires the spectral interferogram *ID*(*k*) as described by Eq. (3.1), i.e. spectrally resolved and containing signal components from the whole depth simultaneously. To access the sample reflectivity profile, an inverse Fourier transformation has to be applied to Eq. (3.1), which finally yields the A-scan:

$$I\_D\left(z\right) \propto \sum\_{n=1}^{N} \sqrt{R\_n R\_R}\,\Big[\gamma\left(2\left(z - z\_n\right)\right) + \gamma\left(2\left(z + z\_n\right)\right)\Big] \tag{3.4}$$

with *γ*(*zn*) as defined by Eq. (3.3). Again, as in TD-OCT, for each reflector the detected signal is convolved with the coherence function, which therefore defines the axial point-spread function of the system. Because *ID*(*k*) is a real-valued function, its complex-valued Fourier transform has an ambiguity between positive and negative frequencies, which gives rise to the mirror terms in Eq. (3.4): from the spectral interferogram it is not possible to decide whether the optical path difference between sample arm and reference arm is positive or negative. Note that only the Fourier amplitude is shown in Eq. (3.4); the phase is omitted. The Fourier amplitudes are squared to obtain power values, which represent the OCT signal in structural OCT images.

In the following, the main characteristic properties of OCT images are presented. If not stated otherwise, these properties are equal for all three OCT variants.

#### **3.2.4 Lateral and Axial Resolution and Image Dimensions**

In OCT, the axial and lateral properties are decoupled from each other. The lateral resolution is defined by the objective and the focusing media in front of the sample, while all axial properties of the interferometric technique are defined by the coherence properties of the light source and the sampling of the signal at the detector. This unique property of OCT can be used in retinal imaging to achieve high axial resolution despite the limited pupil diameter of the eye.

As described in the previous section, the image information in the axial direction along the A-scan is reconstructed from an interferometric measurement of the delays of light backscattered or reflected from the sample. Therefore, the properties of the light source and the sampling of the interferometric signal define the axial properties of the OCT system. The axial resolution in air *δz* of an OCT system equals the roundtrip coherence length of the source and is defined by its wavelength *λ*0 and its spectral bandwidth Δ*λ* [3]:

$$\delta z = l\_c = \frac{2\ln 2}{\pi} \cdot \frac{\lambda\_0^{2}}{\Delta \lambda\_{FWHM}} \tag{3.5}$$

The spectral bandwidth Δ*λFWHM* is the wavelength range of the source, defined as the width at the intensity level equal to half the maximum intensity (FWHM, full width at half maximum). In the lower left inset of Fig. 3.3 the bandwidth is labeled by the wavenumber equivalent Δ*k.* Wavenumber *k* can be converted to wavelength by *λ* = 2*π*/*k*.
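Eq. (3.5) is easy to evaluate numerically. The sketch below uses an 880 nm SLD with 40 nm bandwidth purely as an example (these happen to be the SPECTRALIS source parameters quoted later in this chapter); dividing by the refractive index converts the result to tissue:

```python
import numpy as np

def axial_resolution_um(lam0_nm, dlam_fwhm_nm):
    """Roundtrip coherence length = axial resolution in air, Eq. (3.5)."""
    lam0, dlam = lam0_nm * 1e-3, dlam_fwhm_nm * 1e-3   # work in µm
    return 2 * np.log(2) / np.pi * lam0**2 / dlam

# Example: an 880 nm SLD with 40 nm FWHM bandwidth
dz_air = axial_resolution_um(880, 40)
dz_tissue = dz_air / 1.336          # scale by the refractive index of the eye
print(round(dz_air, 1), round(dz_tissue, 1))   # ~8.5 µm in air, ~6.4 µm in tissue
```

The quadratic dependence on *λ*0 explains why systems at 1050 nm need substantially more bandwidth than systems at 850 nm to reach the same axial resolution.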

The central wavelength of OCT systems is chosen to achieve maximal penetration depth into the tissue under examination. For ophthalmic systems, the wavelength is usually around 850 nm or around 1050 nm, to allow light penetration through the retinal pigment epithelium (RPE) and thereby enable imaging of the choroid. Another important consideration is absorption by the ocular media, as it causes attenuation of

the light which reaches the retina and a further reduction of the signal light on its way back towards the detector. The absorption of the ocular media is very similar to that of water, which is depicted in Fig. 3.5.

**Fig. 3.5** OCT axial resolution depends on the spectral bandwidth of the light source and on the center wavelength. The exemplary plots of identical axial resolution in the eye show that the bandwidth needs to be increased for longer center wavelengths to maintain the same resolution. As indicated by the dotted curve of the absorption coefficient of water [12], not all wavelengths are equally suitable. For greater wavelengths, the eye is considerably less transparent

Figure 3.5 also shows a family of curves based on Eq. (3.5), along which the axial resolution is constant. It is evident that, for a longer center wavelength, the bandwidth of the light source needs to be increased to achieve the same axial resolution. The water absorption curve (dashed red line) shows that absorption is higher at 1050 nm than at 850 nm. The spectral width of the absorption dip limits the maximum achievable resolution; e.g. for 2 μm (green solid line) a bandwidth of 175 nm would be needed, exceeding the width of the spectral window.

Axial imaging depth defines the axial range which is covered in a B-Scan. It is determined by the maximum fringe frequency which can be detected, because the maximum frequency of the interference spectrum encodes the maximum depth (see the exemplary interferogram in Fig. 3.4).

Therefore, the imaging depth *zmax* is defined by the number of sample points *N* across the full recorded spectral width Δ*λ*:

$$z\_{\max} = \frac{1}{4} \frac{\lambda\_0^{2}}{\Delta \lambda}\, N \tag{3.6}$$

In SD-OCT systems, *N* is given by the number of pixels of the line detector onto which the spectrum is imaged. For SS-OCT, it is given by the number of readouts of the photodiode during one sweep of the light source. The maximum imaging range divided by 0.5 *N* gives the axial sampling of the B-Scan. This number characterizes how many micrometers per pixel are imaged and provides the axial scaling of the scan. It is often mistaken for the axial resolution, which defines the minimal distance at which structures can still be distinguished in the OCT B-Scan.
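The distinction between imaging depth, axial sampling and axial resolution can be made concrete with Eq. (3.6); the spectrometer values below (2048 pixels, 80 nm full recorded width) are assumptions for illustration, not specifications of any device:

```python
def imaging_depth_um(lam0_um, full_width_um, n_samples):
    """Maximum imaging depth from spectral sampling, Eq. (3.6)."""
    return 0.25 * lam0_um**2 / full_width_um * n_samples

# Assumed example: 880 nm center, 80 nm full recorded spectral width, 2048 pixels
N = 2048
z_max = imaging_depth_um(0.88, 0.08, N)
axial_sampling = z_max / (0.5 * N)        # µm per pixel in the A-scan
print(round(z_max), round(axial_sampling, 2))
```

In this example the imaging depth is close to 5 mm at roughly 4.8 μm per pixel, whereas the axial *resolution* for the same source would remain at the value given by Eq. (3.5), regardless of pixel count.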

All lateral system parameters of an OCT system depend on the focusing optics, in particular on the numerical aperture (NA, see Chap. 2), as well as on the sampling density and scanning amplitude of the scan system. The same equations describing lateral image parameters hold for cSLO and OCT imaging. A schematic overview is shown in the left part of Fig. 3.6.

**Fig. 3.6** Left: Lateral image parameters of retinal OCT depend on the focusing of the probing beam by the human eye. Right: Schematic of the sampling of an OCT volume

The lateral resolution is given by the spot size of the probing beam. For a Gaussian beam profile, the spot size is defined as the radius *w*0 of the beam waist, where the intensity drops to 1/e². In contrast to that, the lateral resolution is defined by the beam diameter at half maximum (FWHM). This is taken into account by multiplying the beam waist radius by √(2 ln 2), leading to the expression:

$$\delta x = \sqrt{2 \ln 2}\, w\_0 = \sqrt{2 \ln 2}\, \frac{2 \lambda\_0}{\pi} \frac{f\_{sys}}{n \cdot d} = \sqrt{2 \ln 2}\, \frac{\lambda\_0}{\pi\, NA} \tag{3.7}$$

Here, *fsys* denotes the focal length of the optical system, *n* the refractive index of the medium and *d* the diameter of the beam (decay to 1/e²) entering the focusing lens. Tighter focusing would result in a higher lateral resolution, but at the same time it reduces the depth of focus. The depth of focus *b* (sometimes also referred to as the confocal parameter) determines the axial range where the beam radius satisfies *w*(*z*) ≤ √2 · *w*0 and is defined as:

$$b = \frac{2\pi \cdot n}{\lambda\_0} w\_0^{2} = \frac{2\, n \cdot \lambda\_0}{\pi \cdot NA^{2}} \tag{3.8}$$

Thus, the focal volume is defined by its width *δx* and its axial extension *b*. Outside the focal volume, the intensity coming back from the sample is reduced considerably. Therefore, a compromise between focal depth and lateral resolution needs to be found in the optical design of the OCT system. In retinal OCT, with a focal length of the eye of *feye* = 16.7 mm and the refractive index *nvitreous* = 1.336, the lateral resolution is typically about 10 μm, resulting in a depth of focus of approximately 700 μm.
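The typical numbers quoted above can be reproduced from Eqs. (3.7) and (3.8). The 1/e² beam diameter at the pupil used below (0.82 mm) is an assumed value chosen so that the spot size comes out near 10 μm, not a measured quantity:

```python
import numpy as np

lam0 = 0.88        # wavelength in µm
f_eye = 16.7e3     # focal length of the eye in µm
n = 1.336          # refractive index of the vitreous
d = 0.82e3         # assumed 1/e² beam diameter at the pupil in µm

w0 = 2 * lam0 / np.pi * f_eye / (n * d)   # beam waist radius, cf. Eq. (3.7)
dx = np.sqrt(2 * np.log(2)) * w0          # lateral resolution (FWHM)
b = 2 * np.pi * n / lam0 * w0**2          # depth of focus, cf. Eq. (3.8)
print(round(dx, 1), round(b))             # roughly 10 µm and 700 µm
```

Because *b* scales with *w*0², halving the spot size would shrink the depth of focus fourfold, which is the tradeoff discussed in the text.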

As OCT measures optical delays, all axial distances are optical distances. To obtain geometrical distances, for instance for thickness measurements, the refractive index *n* of the medium needs to be known; axial distances measured in OCT scans are then divided by *n*.

To cover a lateral field of view (FOV), the incident OCT beam is scanned. The maximum scan angle *Θmax* defines the maximum field of view. To record a 3D data set, the sample beam is stepped in the second lateral direction after each B-Scan, as shown in the right part of Fig. 3.6. The recorded B-Scan series is stacked together. From this volume, a transversal image can be calculated, referred to as enface OCT image. The step width of the scanner defines the lateral sampling in both directions. Usually a B-Scan is sampled more densely than the slow direction *y* of a volume.

#### **3.2.5 Sensitivity and Roll-Off**

The OCT A-scan presents a profile of backscattered light intensity over tissue depth. The height of a signal compared to the image noise floor is called signal to noise ratio (SNR). The SNR is different for each individual structure, because the signal strength is determined by the backscattering properties, often referred to as reflectivity. Backscattering originates from local changes in refractive index within the tissue due to alterations in the microscopic structure or in the density of scattering particles. The detection of reflectivity enables OCT to reveal the internal structure of an object and is particularly useful to visualize its layer architecture. However, without elaborate modelling, the OCT signal does not provide an absolute quantitative measure of local reflectivity. Due to absorption and scattering in the upper layers less light will reach the lower layers and backscattered light from lower layers is attenuated on its return path again.

Sensitivity has been established as a useful figure of merit to characterize and compare the performance of OCT systems. It is defined via the minimum sample reflectance the system can detect, i.e. the reflectance that still yields an SNR of 1. In OCT, the SNR is calculated as the ratio of the OCT power value to the standard deviation of the background power and therefore is proportional to the sample reflectance *R*.

An OCT signal which is generated by specular reflection of an ideal mirror (i.e. *R* = 1) generates an SNR equal to the sensitivity of the OCT system. SNR and sensitivity are commonly specified in units of power decibel (dB) denoting a logarithmic scaling of the OCT power values. FD-OCT systems can achieve a sensitivity of 100 dB and more, which corresponds to the ability to detect even very weakly reflecting structures with a reflectivity as low as *R* = 10−10.
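The relation between sensitivity in dB and minimum detectable reflectivity is a simple power-of-ten conversion, sketched here:

```python
import math

def min_reflectivity(sensitivity_db):
    """Minimum detectable power reflectivity for a given sensitivity in dB."""
    return 10 ** (-sensitivity_db / 10)

def to_db(reflectivity):
    """Power reflectivity expressed in dB relative to an ideal mirror (R = 1)."""
    return -10 * math.log10(reflectivity)

print(min_reflectivity(100))     # a 100 dB system detects reflectivities down to 1e-10
print(to_db(1e-10))
```

The same conversion applies to the roughly 40 dB dynamic range between the strongest retinal signals and the noise floor mentioned below.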

In the retina, the retinal pigment epithelium (RPE) and the internal limiting membrane (ILM) yield high OCT signals. Single A-scans sometimes can be affected by specular reflection on the ILM or the center of the macula. The maximum signal level and the noise floor span a range of about 40 dB for healthy retinal tissue and clear media.

In a linear scale, the OCT power values exceed the limited number of distinct grey values of common display devices and the perception of the human eye. Therefore, the power signal needs to be mapped to grey scale in a meaningful way. Usually, a logarithmic transformation or a comparable mathematical operation is first applied to the data, compressing the distribution of power values to approach a more Gaussian-like shape. The resulting data is then mapped to 8 bit grey values. The mapping can be further adapted by applying different curves for gamma-correction. This allows to variably assign a range of signal power levels within an OCT B-Scan to a range of grey values and thus increase the contrast for distinct regions of interest.
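A minimal sketch of such a grey-scale mapping is shown below; the 40 dB display range and the gamma parameter are illustrative assumptions, not the values used by any particular software:

```python
import numpy as np

def to_grey8(power, db_range=40.0, gamma=1.0):
    """Map linear OCT power values to 8-bit grey via log compression.

    db_range and gamma are assumed, adjustable display parameters.
    """
    p = np.maximum(power, 1e-12)                   # avoid log(0)
    db = 10 * np.log10(p / p.max())                # 0 dB at the maximum signal
    norm = np.clip(1 + db / db_range, 0, 1)        # map [-db_range, 0] dB to [0, 1]
    return (255 * norm ** gamma).astype(np.uint8)

a_scan = np.array([1.0, 1e-2, 1e-4, 1e-6])         # synthetic power values
print(to_grey8(a_scan))
```

Signals within the chosen dB window are spread over the available grey values, while everything below the window is clipped to black; changing gamma redistributes contrast within the window.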

The highest attainable sensitivity in FD-OCT is limited only by shot noise. This means that, compared to the inherent and unavoidable photon noise, other noise sources can be neglected. The sensitivity is then given by the number of photons which can be detected from a sample with reflectivity *R* = 1. It therefore depends linearly on the incident optical power, the efficiency of photon detection and the sensor integration time. Consequently, there is a fundamental tradeoff between acquisition speed and system sensitivity.

Every FD-OCT system has a characteristic decrease in sensitivity with imaging depth, also called roll-off. It is related to the finite spectral resolution of the system component providing spectral separation. As shown in Fig. 3.4a, b, deeper layers are encoded in fringes with higher frequency and therefore require higher spectral resolution than more superficial layers.

For SD-OCT, the spectral bandwidth focused onto one pixel of the line sensor needs to be resolved. Two main contributions are therefore responsible for the characteristic decrease in sensitivity: the finite pixel size of the line detector and the finite spot size created by the spectrometer optics. For SS-OCT, the spectral interferogram is sampled sequentially. Its spectral resolution is determined by the instantaneous line width of the swept laser source and may be impacted by the bandwidth of the analog-to-digital conversion. SD-OCT is often assumed to have the more pronounced roll-off. This is not generally true, because the swept laser sources used in commercial SS-OCT systems typically have a finite coherence length of several millimeters, resulting in a roll-off of about 2–3 dB/mm [13].

#### **3.2.6 Signal Averaging and Speckle**

The interferometric principle of OCT gives rise to a granular intensity pattern called speckle, which inherently exists due to the coherent detection scheme of OCT.

Within the coherence volume, or resolution element, which is essentially given by the optical lateral and axial resolution of the system, mutual interference from multiple scattering events can occur. As a result, the OCT signal from a single resolution element can vary considerably and is sensitive to variations in scan geometry. Homogeneously scattering tissue manifests in a speckle pattern, with a typical speckle size corresponding to the size of the resolution element and a spatial average brightness reflecting the backscattering properties of the tissue.

Structural OCT images suffer from speckle noise because it might obscure small image features or hamper the recognition of layer boundaries. A common way to reduce speckle and thereby improve the visibility of structures is signal averaging. The tissue is scanned multiple times and the OCT power values are averaged to generate the final OCT B-Scan. The intrinsic variation in scan geometry, together with patient movement, induces the necessary variation in the speckle pattern. Averaging not only reduces the speckle noise but also reduces fluctuations in background noise. The SNR increases with the square root of the number of acquisitions.
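The square-root behaviour can be demonstrated with a toy additive-noise model (a deliberate simplification: real speckle is multiplicative and spatially correlated, but the averaging statistics follow the same law for uncorrelated realizations):

```python
import numpy as np

rng = np.random.default_rng(0)

signal, n_frames = 1.0, 16
# Each simulated "B-Scan" is the constant signal plus independent unit-variance noise
frames = signal + rng.standard_normal((n_frames, 100_000))

snr_single = signal / frames[0].std()
snr_mean = signal / frames.mean(axis=0).std()
print(snr_mean / snr_single)      # close to sqrt(16) = 4
```

Averaging 16 frames improves the SNR by roughly a factor of 4, which is why diminishing returns set in for large frame counts.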

However, changes in speckle pattern reflect changes in the distribution of scattering particles within the resolution element. This is used to distinguish steady tissue from moving particles for blood flow imaging (see Chap. 6).

#### **3.3 SPECTRALIS OCT**

The SPECTRALIS device (see Fig. 3.7) was introduced by Heidelberg Engineering in 2006, based on the Heidelberg Retina Angiograph 2 (HRA2). It incorporates two complementary imaging techniques: confocal scanning laser ophthalmoscopy (cSLO) and optical coherence tomography (OCT). It is a modular ophthalmic imaging platform which allows clinicians and researchers to configure their individual device by combining different imaging modalities. Depending on the integrated modalities, the device is marketed as SPECTRALIS HRA, SPECTRALIS OCT or SPECTRALIS HRA+OCT.

**Fig. 3.7** The SPECTRALIS HRA+OCT combines confocal (cSLO) imaging with OCT and offers numerous different imaging modalities including MultiColor, Fluorescein angiography and OCTA

The cSLO part of the SPECTRALIS device offers a variety of laser sources providing different illumination wavelengths and detection schemes. These include cSLO reflectance imaging in the near infrared (IR) and in the green and blue wavelength ranges, as well as fluorescence imaging modes for angiography (fluorescein angiography, FA; indocyanine green angiography, ICGA) and for autofluorescence (blue and IR). Some selected applications are presented in Chap. 2.

OCT is usually combined with IR confocal imaging, though other combinations are possible as well. Confocal imaging creates a transversal image of the retina corresponding to the en-face plane of OCT. It allows the operator to adjust the SPECTRALIS camera to target the region on the retina. Live images are presented throughout the imaging procedure to control image acquisition and quality. Furthermore, the SPECTRALIS system utilizes the IR cSLO scans for automatic motion tracking.

The SPECTRALIS OCT is based on spectral domain OCT technology, implementing a broadband superluminescent diode (SLD) for illumination and a spectrometer as detection unit. The SLD has a center wavelength of 880 nm and a spectral bandwidth of 40 nm (full-width-half-maximum, FWHM), resulting in an axial resolution of approximately 7 μm in the eye. Based on laser safety guidelines, the optical output power is limited to 1.2 mW.

The SLD, the interferometer and the scanning unit are mounted in the SPECTRALIS camera head. The interferometric OCT signal is coupled into a fiber and directed to the detection unit of the SPECTRALIS, which is located in the housing of the power supply.

The SPECTRALIS features two independent scanning units to support simultaneous cSLO and OCT imaging. The scan pupil of each unit is relayed by imaging optics, including the SPECTRALIS objective, onto the entrance pupil of the patient's eye. Essentially, the scan angle determines the field of view (FOV) of the imaging area on the retina, while the diameter of the scan pupil (aperture) defines the diffraction-limited optical lateral resolution.

The OCT scanning unit comprises two linear scanners, which are driven synchronously with the read-out of the line scan camera in the spectrometer. The OCT frame rate is therefore determined by the scan density (i.e. the number of A-scans within one B-Scan) and the camera's read-out time. The OCT2 module supports a line rate of 85 kHz, resulting in a frame rate of about 110 Hz for the fastest scan pattern.

As discussed in the technical section, the spectral resolution of the spectrometer determines the characteristic roll-off in sensitivity with imaging depth. Compared to the first-generation OCT device (40 kHz A-scan rate), the roll-off of the SPECTRALIS with OCT2 module has been improved considerably, to less than 5 dB over an imaging depth of 1.9 mm.

There is a tradeoff between acquisition speed and sensitivity: the higher the line rate, the faster the image acquisition but the fewer photons can be detected. Acquisition speed therefore is inherently coupled to the sensitivity of the system. For retinal imaging, the maximum laser power is set by the exposure limit according to the laser safety guidelines; to compensate for a shorter integration time, the power can only be increased up to this limit. At the same time, eye motion, heart beat and any motion in general require accelerated acquisition.

Some eye motion occurs at frequencies faster than the OCT frame rate and requires software algorithms to ensure precise and reliable positioning of the OCT scan pattern. Some of the most important software functionalities of the SPECTRALIS rely on software-based motion compensation: image registration, automatic real time (ART) for noise reduction, auto-rescan ability and fovea-to-disc-alignment.

The cSLO and the OCT image are simultaneously recorded and displayed side-by-side in the acquisition window of the SPECTRALIS software, as shown in Fig. 3.8. The cSLO image is used to position the selected OCT scan pattern, which is displayed superimposed on the fundus image (green line in Fig. 3.8). Active eye tracking (TruTrack™) then locks the scan to this position during the acquisition. This is accomplished by an algorithm that repeatedly detects motion in the SLO image and repositions the OCT beam accordingly.

**Fig. 3.8** The acquisition window of the SPECTRALIS software displays cSLO image and OCT image side-by-side: cSLO image (left) of the optic nerve head. The green line marks the selected position of the OCT B-Scan (right)

As a result, the OCT image is precisely aligned even in cases with eye movement during image acquisition. In addition, the co-registration of OCT and cSLO images allows for follow-up examinations at exactly the same position and at any later point in time.

The algorithm to combine multiple images which have been captured at the same location is called ART mean (automatic real time mean). Single OCT images are averaged in real time to decrease noise and enhance contrast within the final OCT image. While ART is active, the SNR of the image continuously increases with approximately the square root of the number of averaged single B-Scans, up to a maximum selected by the user. As a result, faint signals rise above the noise floor and the contrast between single retinal layers is increased.

Moreover, the inherent variability of scanning due to eye and patient motion reduces the granular pattern of speckle because of slight variations in the optical path of the OCT. Speckle reduction allows for detection of tissue structures that would otherwise be obscured by large speckle spots. The result of ART processing can be appreciated by comparing the OCT images shown in Fig. 3.9.

The ability to scan the same position repeatedly over any period of time is of great value for disease detection, progression analysis and treatment control. Follow-up scans (FUP) are co-registered to baseline images, which allows for reliably identifying even small changes. As an example, Fig. 3.10 presents a FUP series for treatment control of wet age-related macular degeneration (AMD), showing the initial clinical finding (inset 1) and two follow-up scans several weeks after treatment (insets 2 and 3). In an exemplary manner, the red lines indicate the correspondence of scan locations within the series. The precision of the placement of the follow-up scans has been evaluated by means of retinal thickness measurements on FUP examinations. A measurement reproducibility of 1 μm was confirmed [14].

**Fig. 3.9** Signal averaging using ART significantly reduces the speckle pattern and increases contrast and SNR

**Fig. 3.10** Follow-up series for treatment control of wet AMD: two follow-up images (2, 3) are co-registered to the baseline image (1). The exact same scan position allows for identifying changes. The red lines indicate identical scan locations

For glaucoma diagnosis, thickness maps of the retinal nerve fiber layer (RNFL) are derived from a combination of circle and radial OCT scans on the optic disc. Sectorial and global RNFL thickness measurements require a reliable point-to-point comparison to assess progression and to accurately compare with reference data. It is essential to remove the influence of head tilt and eye rotation for each individual scan. Moreover, it has to be taken into account that the anatomy can vary significantly among individuals. The Anatomic Positioning System (APS) creates an anatomic map of each patient's eye using two anatomic landmarks: the center of the fovea and the center of Bruch's membrane opening. All scans are aligned along this fovea-to-disc axis, and the sectors are defined relative to this axis, as depicted in Fig. 3.11 for two individuals (Fig. 3.11a, b). As a result, sectorial analysis is less affected by anatomical diversity. This improves the classification based on the reference database and increases diagnostic precision.

APS can also reduce the influence of head tilt and eye rotation on RNFL analysis. Without APS, differences in patient alignment may impede the sectorial analysis of RNFL thickness and thereby impact the assessment of progression, as presented for the same eye in Fig. 3.11c.

The segmentation of retinal layers is a basic prerequisite for many subsequent visualization and analysis features, such as the display of retinal thickness profiles or the definition and visualization of retinal slabs between any retinal boundaries. As part of the SPECTRALIS viewing software, the segmentation editor allows for a user-defined evaluation adapted to the specific pathology.

Per default, the internal limiting membrane (ILM) and Bruch's membrane (BM) are segmented. In circle scans, the segmentation of the RNFL is displayed. Additionally, a multi-layer segmentation can be initiated, which allows to separate all visible layers of the retina. In Fig. 3.12, a multi-layer segmentation is shown together with the naming convention used throughout the software.

If the retina is affected by pathology or the image quality is poor, the automatic segmentation may fail. The segmentation editor tool therefore supports manual segmentation of individual scans and corrects the rest of the volume dataset accordingly.

Volumetric OCT datasets are generally many gigabytes in size and must be visualized in a suitable way to support the clinician in the diagnosis. Transverse section analysis, which is available for volume scans with a minimum density, offers an intuitive view of 3D OCT data. Interpolated B-Scans that are orthogonal to the acquired B-Scans, as well as transversal (or enface) images, are generated. After multi-layer segmentation, 2D projection images of various retinal slabs are available. Average, maximum and minimum intensity projections are standard procedures in image processing to visualize 3D data. For a pre-defined volume or slab, the average, maximum or minimum OCT signal along the z (depth) direction is selected and mapped to the projection image. Enface images of the vitreoretinal border region, the RPE and the choroid, generated by maximum intensity projection, are depicted in Fig. 3.13.
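Such a projection reduces, for each lateral position, the signal within a slab to a single value. A minimal numpy sketch of a maximum intensity projection is shown below; the volume dimensions and the fixed slab indices are arbitrary assumptions standing in for real segmentation boundaries:

```python
import numpy as np

# Synthetic OCT volume with axes (y, z, x) = (slow axis, depth, fast axis)
rng = np.random.default_rng(1)
volume = rng.random((64, 496, 512))

# Restrict the projection to a slab between two boundaries; here simply
# fixed depth indices, in place of segmented retinal layer boundaries
z_top, z_bottom = 100, 160
slab = volume[:, z_top:z_bottom, :]

# Maximum intensity projection along the depth axis yields the enface image
enface = slab.max(axis=1)
print(enface.shape)    # one value per lateral (y, x) position
```

Replacing `max` by `mean` or `min` gives the average or minimum intensity projections mentioned in the text.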

OCT imaging below the RPE may be impacted by the system's specific roll-off and by enhanced scattering depending on the individual pigmentation of the RPE. The contrast of choroidal vascular detail and the visibility of the choroidal-scleral interface (CSI) may be important in the assessment of choroidal pathologies, e.g. in pachychoroid disease. To increase sensitivity in depth and thereby enhance visualization of the choroidal vascular plexus and the CSI, the SPECTRALIS allows for Enhanced Depth Imaging (EDI). Imaging of the lamina cribrosa benefits from this setting as well. For EDI, the characteristic roll-off is reversed in depth: the optimum imaging position, also called sweet spot, is moved to the lower part of the displayed OCT image. Technically, the EDI mode is realized by shifting the position of the reference mirror. Deeper layers then have smaller differences in optical path length and are therefore encoded in interference fringes of lower spatial frequency: their OCT signal gains an additional SNR of 2–3 dB as it is no longer affected by the roll-off. However, EDI cannot account for the losses induced by scattering, which may be enhanced in several pathologies and affects all layers below.

**Fig. 3.11** (**a**–**c**) The Anatomic Positioning System (APS) ensures that the circle scans of the ONH scan pattern are aligned along the fovea-to-disc axis for each patient individually (upper row). Without APS, the influence of head tilt and eye rotation can impede the sectorial analysis of RNFL thickness and assessment of progression (c, lower row). The white lines through the ONH indicate the six sectors according to the Garway-Heath regions for classification of RNFL thickness

An emerging area of interest is widefield OCT. While widefield technology is already widely used in other imaging modalities, such as angiography and autofluorescence, widefield OCT is still being adopted into clinical practice. Widefield OCT imaging may provide significant benefit in the visualization of multifocal macular disorders and in the understanding of peripheral vitreoretinal diseases.

**Fig. 3.12** Multi-layer segmentation of a retinal OCT scan with the naming convention used throughout the software

**Fig. 3.13** En-face OCT images can be calculated from OCT volume scans that have been segmented accordingly: for each slab (vitreoretinal, RPE and choroid), a maximum intensity projection along the depth direction is used to generate transversal images

Widefield OCT imaging is feasible using the SPECTRALIS wide field objective (WFO), which provides a 55° field of view, i.e. a scan length of about 12 mm. The macula, the optic nerve head and areas beyond the vessel arcades can be captured in a single B-Scan. Like standard OCT, widefield OCT can be combined with a variety of cSLO imaging modalities. Figure 3.14 presents an example of the combination of widefield MultiColor imaging with widefield OCT.

A variety of OCT scan patterns and comprehensive scan protocols allow for a systematic and workflow-optimized examination. For glaucoma diagnosis and treatment control, the detection of slight changes in RNFL thickness is essential. The RNFL of healthy eyes is visualized on OCT images as a highly reflective layer that becomes increasingly thick as it approaches the optic disc. The thickness of the peripapillary nerve fiber layer can be determined from three peripapillary circular scans (Fig. 3.15, top left), which are defined by the scan protocol ONH-RC. The RNFL of each circle is automatically segmented (Fig. 3.15,

**Fig. 3.14** Wide-field MultiColor cSLO (left) combined with wide-field OCT provides a 55° field of view

**Fig. 3.15** Nerve fiber layer thickness analysis: Three peripapillary circular scans are placed at the optic nerve head with a fixed starting point relative to the macula position (top left inset) and in each circle scan the RNFL and ILM are segmented (top right inset). Standardized measurements include thickness in predefined segments (bottom left) and comparison of the thickness with a normative database (bottom right inset). The black line indicates the measurement of the individual patient in comparison with the average thickness for this age and population (green line) and with the margins of the normative database (green—normal, yellow—borderline and red—out of normal range)

top right) and the thickness values are compared with a reference database. The results are analyzed within predefined sectors (called Garway-Heath sectors) as well as globally (Fig. 3.15, lower row).

A radial line-scan pattern is part of the ONH-RC scan protocol and allows for assessing the thickness of the neuro-retinal rim based on the detection of the disc margin. From each B-Scan, the shortest distance from Bruch's membrane opening (BMO) to the ILM is determined and indicated by a cyan arrow in the B-Scan (see Fig. 3.16, top right). The analysis is therefore called BMO-based minimum rim width (BMO-MRW). It takes into account the variable geometry of the neural tissue as it exits the eye via the optic nerve head. BMO-MRW data can be classified based on a reference database according to Garway-Heath sectors as well as globally (Fig. 3.16, lower row).

Glaucoma may also involve the loss of retinal ganglion cells around the macular region. From a dense volume scan pattern of the macula (Posterior Pole scan protocol), the ganglion cell layer (GCL) can be analyzed and followed up. For each B-Scan, the GCL is segmented. The resulting thickness map is color-coded and allows for comparing GCL thickness on a region-based approach, see Fig. 3.17.

#### **3.4 Additional OCT Contrast Mechanisms and New Technologies**

For more than a decade, structural OCT measurements have been used very successfully in clinical routine for the diagnosis of retinal and neurodegenerative diseases (see Chaps. 4 and 5). OCT technology has also been established for measuring and assessing structural parameters within the eye globe, e.g. the chamber angle or the corneal thickness (see Chap. 12), and for planning cataract and refractive surgeries (Chap. 14). Furthermore, in recent years, research investigating additional or complementary contrast mechanisms based on OCT technology has been published continuously.

**Fig. 3.16** BMO-MRW analysis: A radial line-scan pattern is placed at the ONH (top left inset) and in each B-Scan the shortest distance of the BMO endpoints to the ILM is found and indicated by the cyan arrow (top right inset). Standardized measurements include thickness in predefined segments (bottom left) and the BMO minimum rim width according to the previously found landmarks in the OCT B-Scans (bottom right inset). The black line indicates the measurement of the individual patient in comparison to the margins of the normative database (green—normal, yellow—borderline and red—out of normal range)

**Fig. 3.17** Segmentation of the ganglion cell layer (GCL) and resulting color-coded thickness maps

In the following section, a short overview of the most important contrast mechanisms is given.

#### **3.4.1 OCT Angiography (OCTA)**

The OCT signal originating from blood vessels shows a much larger variance than the OCT signal of stationary tissue. The signal alterations in the vessels are due to the flow of the backscattering particles (mainly erythrocytes). Increased variance is observed both for the intensity and for the phase of the complex-valued OCT signal and is used to compute OCTA images.

For OCTA images, B-Scans at the same position are acquired repeatedly, and sophisticated mathematical and statistical algorithms have been developed to discriminate vascular structures from stationary tissue based on the variance of the OCT signal. These algorithms face several challenges: fast eye movements (bulk motion) cause a signal variance also for stationary tissue, which needs to be separated from the variance caused by retinal blood flow. In addition, the blood flow in larger vessels of the inner retina can cause so-called projection artefacts in the deeper vascular plexus. See reference [15] for an overview, and Chap. 6 as well as the literature referenced therein for further details and ophthalmic applications of OCTA technology.
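The variance-based discrimination can be illustrated with a minimal speckle-variance sketch in Python (numpy assumed; the array sizes, noise levels and the "vessel" patch are invented for the demo and do not reproduce any commercial algorithm):

```python
import numpy as np

def speckle_variance(bscans):
    """Per-pixel intensity variance across N repeated B-scans acquired at
    the same position; high variance marks moving scatterers (blood)."""
    return np.var(bscans, axis=0)

# Synthetic demo data: static tissue with small noise, plus a patch whose
# signal decorrelates between repeats, mimicking flow in a vessel.
rng = np.random.default_rng(0)
stack = 100.0 + rng.normal(0.0, 1.0, size=(4, 64, 64))   # 4 repeats
stack[:, 30:34, 30:34] += rng.normal(0.0, 20.0, size=(4, 4, 4))

octa = speckle_variance(stack)   # the "vessel" patch stands out clearly
```

In a real system, the same variance (or a decorrelation measure) is computed on the complex or intensity OCT signal only after bulk-motion compensation.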

#### **3.4.2 Quantitative Measurement of Retinal Blood Flow**

Whereas OCTA visualizes the blood flow and thus the geometry of the different vascular plexus, it does not quantify the blood flow, e.g. in units of microliters per second. Such a quantitative assessment of ocular blood flow also requires knowledge of the velocity profile within the vessel. In addition, the vessel diameter and geometry need to be known; these can be derived from structural OCT and OCTA data. If the blood flow contains a velocity component in the z-direction (as is the case for the large vessels at the rim of the optic nerve head), this component can be extracted from OCT phase measurements of consecutive A-scans. In addition to the angle dependency, these phase-shift measurements are sensitive to bulk motion and to phase instabilities of the OCT system. For vessels which are predominantly oriented in the lateral direction, more sophisticated decorrelation methods and/or extensions of the phase-based methods are required to measure all velocity components. This is a very active field of research, and promising approaches including literature references are presented in Chap. 7.
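The phase-to-velocity conversion follows the standard double-pass Doppler relation; the sketch below assumes illustrative system parameters (840 nm center wavelength, tissue refractive index 1.38, 100 kHz A-scan rate), not those of any particular device:

```python
import numpy as np

def axial_velocity(delta_phi, lam0=840e-9, n=1.38, t=10e-6):
    """Axial velocity v_z from the phase shift between consecutive A-scans.
    Double-pass Doppler relation: delta_phi = 4*pi*n*v_z*t / lam0.
    t is the A-scan interval (10 us corresponds to a 100 kHz rate)."""
    return delta_phi * lam0 / (4 * np.pi * n * t)

v = axial_velocity(np.pi / 2)   # ~7.6 mm/s at these example settings
v_max = axial_velocity(np.pi)   # unambiguous limit: phase wraps at +/- pi
```

The phase-wrapping limit `v_max` is why fast arterial flow requires either higher A-scan rates or phase unwrapping.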

#### **3.4.3 OCT with Visible Light (Vis-OCT)**

To date, all commercially available OCT systems used in ophthalmology operate in the near-infrared wavelength range between 0.8 and 1.3 μm. Shifting the OCT light source to the visible wavelength range would imply several challenges and disadvantages, but would also have two major advantages, presented in the following:

#### **3.4.3.1 Resolution**

With vis-OCT, both the lateral and especially the axial resolution of retinal OCT images could be significantly improved. The dependency of the axial resolution on the OCT wavelength was already discussed above, and from Eq. (3.5) it is obvious that the use of a broad visible spectrum at shorter wavelengths (e.g. 450–700 nm) pushes the achievable axial resolution into the submicron range. It is about 8× higher compared to a standard infrared OCT centered at 880 nm with 80 nm bandwidth.
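The quoted gain of roughly 8× can be checked directly from Eq. (3.5); in the sketch below, the visible-band center wavelength (560 nm) and bandwidth (250 nm) are illustrative choices for a 450–700 nm spectrum:

```python
import numpy as np

def axial_resolution(lam0, dlam):
    """Axial resolution (in air) for a Gaussian spectrum, cf. Eq. (3.5):
    dz = (2*ln2 / pi) * lam0**2 / dlam."""
    return (2 * np.log(2) / np.pi) * lam0**2 / dlam

dz_nir = axial_resolution(880e-9, 80e-9)    # standard NIR OCT, ~4.3 um
dz_vis = axial_resolution(560e-9, 250e-9)   # broad visible band, sub-micron
ratio = dz_nir / dz_vis                     # ~8x improvement
```

In tissue, both values are further divided by the group refractive index (~1.38), which leaves the ratio unchanged.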

The lateral resolution of OCT en-face images is given by the Rayleigh criterion, where *r* defines the minimum distance between two resolvable structures. For a given numerical aperture, it is therefore proportional to the center wavelength:

$$r = \frac{0.61 \cdot \lambda}{NA} \to r = 6.8 \cdot \lambda \quad \text{for } NA \approx 0.09\ (\mathbf{d}\_{\text{pupil}} \approx 3\,\text{mm}). \tag{3.9}$$

Due to the presence of optical aberrations, dilation of the pupil does not, in general, result in an improvement of resolution. Thus, the transition to visible light, e.g. centered at 500 nm, could improve the lateral resolution by a factor of 1.6 or 2 compared to OCT images acquired at 800 nm or 1000 nm center wavelength, respectively.
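Equation (3.9) and the quoted improvement factors can be verified numerically (a minimal sketch, with NA = 0.09 as in the equation above):

```python
def lateral_resolution(lam0, na=0.09):
    """Rayleigh criterion, cf. Eq. (3.9): r = 0.61 * lam0 / NA.
    NA ~ 0.09 corresponds to a ~3 mm pupil diameter."""
    return 0.61 * lam0 / na

r500 = lateral_resolution(500e-9)     # ~3.4 um
r800 = lateral_resolution(800e-9)     # 1.6x coarser than at 500 nm
r1000 = lateral_resolution(1000e-9)   # 2.0x coarser than at 500 nm
```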

#### **3.4.3.2 Spectral Imaging, Oximetry**

In addition, the spectral information of the backscattered visible light could be used for spectroscopic analysis. Since the absorption curves for oxygenated (HbO2) and deoxygenated hemoglobin (Hb) show characteristic differences in the visible range, the spectral data acquired with vis-OCT could also be used to determine the oxygen saturation of the arterial and venous blood flow [16]. The oxygen saturation is defined as the percentage of oxygen-saturated hemoglobin (HbO2) with respect to the total amount of oxygenated and deoxygenated hemoglobin. Such a measurement could complement quantitative blood flow data, since knowledge of arterial and venous oxygen saturation together with reliable blood flow data would allow estimation of the total oxygen supply to the retina.
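This definition can be turned into a simple two-component spectral unmixing; in the sketch below, the extinction values are invented placeholders, whereas real analyses use tabulated hemoglobin spectra:

```python
import numpy as np

def oxygen_saturation(mu, eps_hbo2, eps_hb):
    """sO2 = c_HbO2 / (c_HbO2 + c_Hb), with the two concentrations obtained
    by least-squares unmixing of a measured attenuation spectrum mu."""
    a = np.column_stack([eps_hbo2, eps_hb])
    c, *_ = np.linalg.lstsq(a, mu, rcond=None)
    return c[0] / (c[0] + c[1])

# Synthetic check: a spectrum mixed from a known 80% saturation is recovered.
eps_hbo2 = np.array([1.0, 0.6, 0.3, 0.8])   # placeholder extinction values
eps_hb = np.array([0.7, 0.9, 0.5, 0.2])     # per sampled wavelength
mu = 0.8 * eps_hbo2 + 0.2 * eps_hb
so2 = oxygen_saturation(mu, eps_hbo2, eps_hb)
```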

Recently, promising results obtained with experimental vis-OCT systems have been published [16, 17]. However, this technique also comes with limitations and technical challenges: the optical design needs to be carefully corrected for chromatic aberrations, and cost-efficient broadband light sources are not commercially available. In addition, there are fundamental problems: due to the potential photochemical action of blue light, the laser exposure limits are very strict, resulting in reduced sensitivity of the OCT system. Visible light also causes bleaching of the photopigments and appears very bright to the patient, leading to considerable discomfort. The most important limitation of vis-OCT is most likely the inaccessibility of structures below the intact RPE, due to its strong absorption of visible light. Therefore, the vascular plexus within the choroid (choriocapillaris and larger choroidal arteries and veins) cannot be imaged and assessed with vis-OCT or vis-OCTA.

#### **3.4.4 OCT Elastography (OCE)**

Many ocular diseases are associated with a change of mechanical tissue properties. Examples are keratoconus (mechanical properties of the cornea), presbyopia (stiffening of the lens), glaucoma (often associated with a stiffening of the sclera and/or lamina cribrosa), arteriosclerosis (reduction of vessel elasticity) and others. Therefore, a reliable, non-contact method to measure the elasticity of different ocular tissues in vivo can potentially have a big impact on the early diagnosis of these diseases, perhaps even before structural changes can be detected with conventional OCT.

Brillouin scattering [18] is one approach to measure the Young's (or shear) modulus, which characterizes the tissue elasticity. Unfortunately, due to the extremely small wavelength shift, a laser with a very narrow spectral band is needed, and high technical effort is required to separate the Brillouin-shifted backscattered photons from the elastically backscattered background and to measure the small wavelength shift.

A different, very promising approach is OCE [19]. The review article by Kirby et al. gives a comprehensive introduction and overview of the technology [20]. For OCE measurements, a mechanical load is applied to the tissue under examination. The tissue responds to the applied stress, and its displacement is measured by an appropriate OCT system. Different methods to provide the mechanical load have been proposed and tested. The mechanical load can be applied as a static force, as a sinusoidal vibration, or as a fast transient, resulting in different requirements for the imaging system. For clinical use, a non-contact technique would be highly preferable; the most promising methods are the application of an air puff [21, 22] or excitation by focused ultrasound [20, 23].
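In the transient (shear-wave) variants of OCE, elasticity is commonly quantified from the speed of the propagating wave; the sketch below uses the standard textbook relations with an assumed tissue density:

```python
def moduli_from_shear_wave(c, rho=1000.0):
    """Shear modulus G = rho * c**2 from a measured shear-wave speed c [m/s];
    for nearly incompressible soft tissue, Young's modulus E ~ 3*G.
    rho = 1000 kg/m^3 is an assumed tissue density."""
    g = rho * c**2
    return g, 3.0 * g

g, e = moduli_from_shear_wave(2.0)   # 2 m/s -> G = 4 kPa, E ~ 12 kPa
```

The OCT system's role is to track the micro-displacements from which the wave speed `c` is extracted; the conversion above is then a one-line post-processing step.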

#### **3.4.5 Polarization Sensitive OCT (PS-OCT)**

Polarization-sensitive OCT is an extension of standard spectral domain or swept source OCT. Additional contrast is provided by measuring and evaluating the change of the polarization state of the backscattered probe light due to its interaction with the tissue under examination. PS-OCT can also be considered an improvement of the earlier SLO-based approach of scanning laser polarimetry (SLP). Since in PS-OCT the depth of the backscattered light is exactly known, the effects of different polarization-changing tissue layers along the beam path can be separated properly. This has always been a problem of SLP technology, where on the one hand the birefringence of the cornea needs to be corrected and on the other hand the RNFL birefringence data could be corrupted by the birefringent contribution of deeper layers such as the sclera, causing so-called atypical RNFL patterns [24].

Mathematically, the polarization of light and the polarization-changing properties of a tissue sample can be described with the Jones formalism, which is discussed in detail in the review paper by de Boer, Hitzenberger and Yasuno [25]. This formalism uses the electric field vector E to describe the polarization state of the electromagnetic light wave propagating in the *z*-direction; *Ex* and *Ey* are its complex components. The polarization-changing properties of a medium are described by the Jones matrix *J*, a 2 × 2 matrix with complex entries.

In the case of n consecutively transmitted tissue layers, the resulting Jones matrix *J* can be written as the product of n individual Jones matrices, *J = Jn* ∙ *Jn−1* ∙ *…* ∙ *J1*. The polarization state *E*′ of the transmitted light wave is then given by:

$$\overrightarrow{\mathbf{E}}' = \begin{pmatrix} \mathbf{E}'\_{\mathbf{x}} \\ \mathbf{E}'\_{\mathbf{y}} \end{pmatrix} = \mathbf{J} \cdot \overrightarrow{\mathbf{E}} = \begin{pmatrix} \mathbf{J}\_{11} & \mathbf{J}\_{12} \\ \mathbf{J}\_{21} & \mathbf{J}\_{22} \end{pmatrix} \cdot \begin{pmatrix} \mathbf{E}\_{\mathbf{x}} \\ \mathbf{E}\_{\mathbf{y}} \end{pmatrix} = \mathbf{J}\_{\mathbf{n}} \cdot \mathbf{J}\_{\mathbf{n-1}} \cdot \dots \cdot \mathbf{J}\_{2} \cdot \mathbf{J}\_{1} \cdot \overrightarrow{\mathbf{E}} \tag{3.10}$$

The amplitude and the relative phase of the two components *E*′*x* and *E*′*y* after passing through the media are measured for two different initial polarization vectors *E*. The equation above thus yields four complex equations, which allow for calculating the complex components of the Jones matrix *J*. The retardation, the orientation of the optic axis, and the dichroism of the transmitted layer can then be calculated from the Jones matrix (see [25]).
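As a concrete instance of the formalism, the following sketch models each tissue layer as a linear retarder and composes two layers by matrix multiplication as in Eq. (3.10); all numbers are illustrative:

```python
import numpy as np

def retarder(delta, theta):
    """Jones matrix of a linear retarder with retardation delta and fast-axis
    orientation theta, a common model for a birefringent tissue layer."""
    r = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    d = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return r.T @ d @ r

# Eq. (3.10): stacked layers combine by matrix multiplication, J = J2 @ J1.
j_total = retarder(np.pi / 6, 0.0) @ retarder(np.pi / 6, 0.0)

e_in = np.array([1.0, 0.0])   # horizontally polarized input
e_out = j_total @ e_in

# For aligned axes, the cumulative retardation equals the phase difference
# of the eigenvalues of the total Jones matrix: pi/6 + pi/6 = pi/3.
ev = np.linalg.eigvals(j_total)
delta_total = abs(np.angle(ev[0]) - np.angle(ev[1]))
```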

In general, four measurements (amplitude and phase) are required in order to determine the complex components of the Jones matrix: two linearly independent polarization vectors are applied to the sample, and two detection units measure the components of two orthogonal (or at least linearly independent) polarization states. In most recently published papers on PS-OCT systems, swept source technology has been used for two reasons (see e.g. ref. [26]). Firstly, SS-OCT systems usually have an axial imaging range of several millimeters. Therefore, the two required input polarization states can be elegantly encoded in one single A-scan: the two orthogonal polarization states are divided into separate beam paths, one component is delayed (typically by half of the z-range, i.e. 2–3 mm), and finally the two polarization states are recombined. The two delayed wavelength sweeps with different polarization vectors are then scanned over the retina. Secondly, the detection unit can easily be duplicated for SS-OCT systems. The optical set-up is usually arranged such that in total four PIN photodiodes form two balanced detection units, each detecting the signal for a different (linearly independent) polarization vector.

Thus, the four independent measurements required by the Jones formalism can be very efficiently extracted from one A-scan, which is recorded in two different polarization-dependent detection channels. Both channels yield information about the two depth-encoded input polarization states by separately evaluating the upper and lower half of the total OCT z-range.

Several effects can cause a change of polarization. In biological tissue, especially the so-called form birefringence plays an important role. In birefringent tissue, the refractive index depends on the orientation of the polarization of the incident light. Thus, the component polarized along the slow axis experiences a phase retardation with respect to the component polarized parallel to the fast axis. If the refractive index difference ∆*n* is known, measuring the retardation *δ* allows for determining the thickness of the birefringent layer. In addition, the angle *θ* of the optic axis yields information about the orientation of the anisotropic tissue.

Form birefringence is caused by rod-like structures with a spacing smaller than the wavelength. This is the case for the axon bundles within the RNFL, which are visualized in the retardation map of Fig. 3.18. Another example of form birefringence in the retina is the Henle fiber layer around the macula. This birefringent contrast can also be used for differentiation of tissue species and improved segmentation of tissue layers.

**Fig. 3.18** OCT "en-face" image of a young healthy volunteer is shown in the left image (**a**). In (**b**) the double-pass phase retardation map, which was calculated from the PS-OCT dataset, is displayed. It shows a strong retardation signal for the superior and inferior RNFL bundles. Note that there is also some birefringence around the fovea, which is caused by the radially orientated Henle fibers (adapted from reference [26])

**Fig. 3.19** PS-OCT B-Scan through the fovea of a human retina: intensity image (**a**) and DOPU (degree of polarization uniformity) contrast (**b**), corresponding to the color scale from black (DOPU = 0) to red (DOPU = 1). The inner retina shows no or only very little depolarization and therefore the DOPU value is close to 1 (orange to red pixels). In deeper layers the situation is different: backscattered light from the end tips of the photoreceptors (ETPR), the RPE and Bruch's membrane is to a significant part statistically polarized, resulting in considerably lower DOPU values (yellow, green and blue pixels). To avoid erroneous polarization data, areas below a certain intensity threshold are displayed in gray. Image size: 15° (horizontal) × 0.75 mm (vertical, optical distance) (adapted from reference [27])

Another polarization-changing effect is dichroism, which describes the polarization-dependent attenuation of light due to polarization-dependent absorption within the tissue. This effect is of minor importance in the living human eye [25] and therefore will not be discussed further in this chapter.

Finally, strongly scattering tissue has a depolarizing effect, i.e. the polarization is scrambled. Since within the healthy human retina mainly the retinal pigment epithelium (RPE) shows such strong depolarizing properties, this effect has been used in the past to improve the imaging contrast and segmentation of the RPE layer (see Fig. 3.19), as well as to detect the absence of RPE in patients with geographic atrophy [28].

In addition, the migration of RPE cells into the inner retinal layers can be visualized by displaying B-Scans with depolarization contrast (see Miura et al. [29]). However, it should be noted that the degree of depolarization cannot be directly calculated for a single pixel: due to the coherent detection scheme, the OCT signal of each pixel is fully polarized, and thus the degree of polarization cannot be measured pixel-wise. In the case of strongly depolarizing tissue, however, the polarization of adjacent pixels is completely uncorrelated and therefore varies statistically. Thus, when averaging the polarization over a sliding, localized area (kernel), the length of the mean polarization vector is close to 1 for polarization-preserving structures, whereas it is considerably reduced for polarization-scrambling tissue. The Hitzenberger group introduced for this mean polarization parameter the term "DOPU", which stands for degree of polarization uniformity [27].
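The kernel-averaging idea behind DOPU can be sketched as follows (numpy assumed; a single block-shaped kernel is used instead of a sliding window, and the fields are synthetic):

```python
import numpy as np

def dopu(ex, ey):
    """Degree of polarization uniformity over one kernel: the normalized
    Stokes components (Q, U, V) of every pixel are averaged and the length
    of the mean Stokes vector is returned. ex, ey are the complex OCT
    signals of the two polarization channels."""
    i = np.abs(ex) ** 2 + np.abs(ey) ** 2
    q = (np.abs(ex) ** 2 - np.abs(ey) ** 2) / i
    u = 2.0 * np.real(ex * np.conj(ey)) / i
    v = 2.0 * np.imag(ex * np.conj(ey)) / i
    return np.sqrt(q.mean() ** 2 + u.mean() ** 2 + v.mean() ** 2)

rng = np.random.default_rng(1)
n = 64
# Polarization-preserving tissue: identical state in every pixel -> DOPU = 1.
d_uniform = dopu(np.ones(n, complex), np.zeros(n, complex))
# Depolarizing tissue (e.g. RPE): uncorrelated random states -> DOPU << 1.
ex = rng.normal(size=n) + 1j * rng.normal(size=n)
ey = rng.normal(size=n) + 1j * rng.normal(size=n)
d_scrambled = dopu(ex, ey)
```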

#### **3.4.6 Adaptive Optics OCT (AO-OCT)**

Adaptive optics is a concept to improve the resolution of an optical imaging instrument by actively compensating the static and dynamic aberrations of the optical system (see also Chap. 17). In retinal imaging, the diffraction-limited optical resolution is determined by the numerical aperture of the human eye. The given focal length of the eye and the pupil diameter, which can be dilated to a maximum of about 8 mm, restrict the NA to approximately 0.24. Thus, theoretically a resolution of about 2 μm (Rayleigh criterion) can be achieved at 840 nm. In practice, however, the optical resolution is reduced for dilated pupils, since the optical aberrations of the human eye increase rapidly with the pupil diameter; the deterioration of the optical system outweighs the theoretical benefit of a higher NA. Therefore, in commercial OCT systems typically a beam of 2 mm diameter enters the eye, resulting in a lateral resolution of about 9 μm.

The goal of adaptive optics is to actively compensate the optical aberrations by means of an adaptive optical component, as e.g. a deformable mirror, and thus realize the diffraction limited resolution also for fully dilated pupils.

In contrast to the lateral resolution, the axial resolution of OCT systems is independent of the numerical aperture and can be improved to 3 μm, as demonstrated in the past [30, 31], by increasing the bandwidth of the light source. Thus, with pupils dilated to >6 mm, distortion-compensating adaptive optics and an improved axial resolution, an isotropic point spread function with a width of <3 μm in all dimensions could be achieved, which would enable measurements on a cellular level. Such a resolution enhancement has been demonstrated in complex laboratory set-ups, including adaptive optics with a wavefront measurement to provide an online feedback mechanism for the adaptive correction element. In several studies this technique has been used for different applications in the retina, for example for in-vivo investigation of photoreceptor disc shedding [32] and for visualization of micro-capillaries [33].

Since the technical effort and costs for the realization of adaptive optics OCT systems are considerable, computational approaches have recently been proposed to numerically correct OCT data for optical aberrations. The basis for this approach is the acquisition of phase-stable volumetric OCT data. From the en-face OCT data, the complex pupil function, which contains the information about the wavefront distortions, can be calculated and numerically corrected. Finally, the corrected pupil function is used to recalculate aberration-corrected en-face images of the retina. With this method, in principle also the defocus caused by the Gaussian beam profile along the z-axis can be numerically compensated. For more details the reader may refer to references [34–36].
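The numerical correction can be illustrated for the simplest aberration, defocus; the sketch below assumes the quadratic pupil phase is already known, whereas real pipelines estimate it from the data itself, e.g. by optimizing an image-sharpness metric:

```python
import numpy as np

def correct_defocus(enface_field, coeff):
    """Computational aberration correction (sketch): transform the complex
    en-face field to the pupil plane, remove an assumed quadratic (defocus)
    phase, and transform back."""
    n = enface_field.shape[0]
    f = np.fft.fftfreq(n)
    f2 = f[:, None] ** 2 + f[None, :] ** 2        # squared pupil coordinate
    pupil_phase = np.exp(1j * coeff * f2)         # assumed defocus phase
    return np.fft.ifft2(np.fft.fft2(enface_field) * np.conj(pupil_phase))

# Round-trip check: a field blurred with a known defocus phase is restored
# exactly by correcting with the same coefficient.
rng = np.random.default_rng(2)
field = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
f = np.fft.fftfreq(32)
f2 = f[:, None] ** 2 + f[None, :] ** 2
blurred = np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * 500.0 * f2))
restored = correct_defocus(blurred, 500.0)
```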

#### **3.4.7 High Speed OCT**

Since the beginning of the clinical use of OCT imaging devices in ophthalmology, the acquisition speed has been constantly increased with the goal of minimizing the time required to record a complete and dense 3D volume stack of the retina. The reasons for the demand for high speed OCT systems are manifold: eye movements such as microsaccades and ocular drifts are present even in fixating eyes. They interfere with the scan pattern and cause artefacts. Sophisticated eye tracking algorithms can eliminate these artefacts by detecting the eye movement, rejecting the corrupted data, repositioning the scan system, and reacquiring the rejected scans. However, this usually results in prolonged acquisition times, especially for patients with poor fixation ability. Higher acquisition speeds can be used either to increase the A-scan density, with the benefit of an improved digital lateral resolution, or to extend the field of view further into the periphery without increasing the acquisition time. The downside of high speed scanning OCT is the reduced illumination time per A-scan, which in general results in a decrease of sensitivity.
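The motivation for MHz A-scan rates is easy to quantify: the minimum acquisition time for a dense volume scales inversely with the rate (a sketch; the scan dimensions are illustrative):

```python
def volume_time(n_x, n_y, ascan_rate_hz):
    """Minimum time to record a dense volume of n_y B-scans with n_x A-scans
    each, ignoring flyback, tracking and rescans."""
    return n_x * n_y / ascan_rate_hz

t_100k = volume_time(512, 512, 100_000)     # ~2.6 s at 100 kHz
t_fdml = volume_time(512, 512, 6_700_000)   # ~0.04 s at 6.7 MHz (cf. ref. [38])
```

At several seconds per volume, eye motion is almost guaranteed to corrupt some B-scans; at tens of milliseconds, the volume is effectively frozen in time.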

In order to achieve A-scan rates in the MHz range, the following two approaches have been investigated:

#### **3.4.7.1 Fourier Domain Mode Locked (FDML) Lasers with MHz Sweep Rate**

The use of so-called Fourier domain mode-locked lasers as OCT light sources was first proposed by Huber, Wojtkowski and Fujimoto in 2006 [37]. FDML lasers usually consist of a fiber-based ring resonator, a semiconductor optical amplifier (SOA), a tunable bandpass filter, and fiber-based components such as a polarization controller, an isolator, and a laser output coupler. Klein et al. used such a swept source FDML laser at 1050 nm center wavelength with an A-scan rate of 6.7 MHz to acquire a dense ultra-widefield fundus OCT volume within 0.3 s [38].

#### **3.4.7.2 Parallelization of OCT Data Acquisition**

Another approach to reduce the imaging time for dense OCT volumes is based on lateral parallelization of the acquisition by simultaneously capturing A-scans at multiple locations of the sample. The most common parallelization technique is the line-field approach, which has been demonstrated for SD-OCT as well as SS-OCT systems. Here, instead of a scanned focus spot, a complete line is projected onto the sample; therefore, only one scanner is required to cover a 2D area. In SD-based line-field OCT systems, a two-dimensional detector (CCD or CMOS chip) is used. One dimension (line of pixels) samples the illuminated line on the tissue, and the other dimension (column of pixels) measures the spectrally resolved interference fringe pattern for the corresponding line pixel. Thus, the read-out of the 2D detector yields the complete information for an entire B-Scan. Experimental line-field SD-OCT systems have been demonstrated for retinal [39] and corneal [40] imaging.

In line-field SS-OCT systems, only a 1D line camera is used to image the illuminated line on the sample. The line camera is read out after each step of the wavelength sweep, i.e. after one complete sweep the data for a full B-Scan have been acquired. In ref. [41], a line-field SS-OCT system is described which enabled the acquisition of volumetric OCT data at an effective A-scan rate of up to 1 MHz.

Finally, the line-field SS-OCT technique can also be applied to full-field SS-OCT simply by replacing the line detector with a 2D image sensor. One wavelength sweep then provides the OCT data for a complete 3D volume stack (see Chap. 8 and [42, 43]).

#### **3.5 Summary and Conclusion**

OCT is an extremely valuable imaging technique for generating cross-sectional images with high axial resolution for tissue diagnosis. It is especially useful in ophthalmology, as the transparency of the ocular media allows for imaging of the retina at the back of the eye. Therefore, not only was the first laboratory demonstration in 1991 performed on the eye, but the first commercial device was also an ophthalmic device, entering the market only 5 years later.

In the last 25 years, a tremendous development of OCT technology has taken place. New OCT variants, moving from time-domain acquisition to frequency-domain measurement of spectral interference, allowed for an enormous increase in acquisition speed and at the same time an increase of tissue contrast in the images. This was the starting point for the use of OCT in daily clinical practice in ophthalmology. OCT was combined with confocal scanning laser ophthalmoscopes, featuring various fluorescence imaging techniques, into multimodal imaging platforms like the SPECTRALIS. The insights gained into the course of retinal diseases and glaucoma could be incorporated into numerous diagnostic tools.

Beyond structural imaging, the OCT signal can be further analyzed to enable functional imaging of tissue. One technique is OCT angiography, which can visualize the blood vessel network. As it works without contact and does not require any dye, it was accepted as a clinical imaging tool very quickly. Other techniques, like PS-OCT for detecting tissue birefringence or OCT elastography for measuring mechanical tissue properties, have also shown great potential in experimental settings. OCT with visible light carries the potential of a significant increase in axial resolution and the additional information of an oxygenation measurement as a metabolic biomarker. However, the considerable increase in hardware complexity on the one hand and drawbacks like trade-offs regarding penetration depth or imaging speed on the other have so far hindered the development of commercial devices.

Nevertheless, it can be assumed that OCT development will continue and that either the availability of new components or the demonstration of an unconditional clinical benefit will lead to the breakthrough of both the methods described here that have not yet been commercially implemented and those still unknown today.

#### **References**


field optical coherence tomography. Opt Express. 2013;21(9):10850–66.


optical coherence tomography. Opt Express. 2007;15(12):7103–16.



# **Ophthalmic Diagnostic Imaging: Retina**

Philipp L. Müller, Sebastian Wolf, Rosa Dolz-Marco, Ali Tafreshi, Steffen Schmitz-Valckenberg, and Frank G. Holz

#### **4.1 Introduction**

In the past decades, optical coherence tomography (OCT) has been established as one of the most important imaging modalities in clinical practice for the diagnosis and follow-up of patients with retinal diseases, as well as a source of outcome measurements in clinical trials. Using backscattered light waves from the retina that interfere with a reference beam, it enables an *in-vivo* depth profile of the tissue. Modern improvements of this interferometric technique achieve non-invasive visualization of chorioretinal structures at near-histological detail, with an axial resolution below 7 μm (Fig. 4.1) [1, 2].

Moorfields Eye Hospital, NHS Foundation Trust, London, UK

S. Wolf: Department of Ophthalmology, University of Berne, Berne, Switzerland

R. Dolz-Marco: Heidelberg Engineering, Heidelberg, Germany; Unit of Macula, Oftalvist Clinic, Valencia, Spain

A. Tafreshi (\*): Heidelberg Engineering, Heidelberg, Germany

S. Schmitz-Valckenberg · F. G. Holz: Department of Ophthalmology, University of Bonn, Bonn, Germany

The first commercially available OCT devices were based on time-domain detection and featured rather low scan rates of 400 A-scans per second, leading to possible eye-motion errors and reduced measurement accuracy and reproducibility (Fig. 4.1a). Nevertheless, the technique became widely accepted for the assessment of various retinal diseases [3, 4]. Subsequently, spectral-domain (SD) and swept-source (SS) imaging technologies dramatically improved sampling speed and signal-to-noise ratio: SD-OCT uses a high-speed spectrometer that measures light interference from all time delays simultaneously, whereas SS-OCT uses a tunable frequency-swept laser light source (sequentially emitting various frequencies in time) together with photodetectors instead of a spectrometer to measure the interference [5].
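The Fourier-domain principle described above can be sketched numerically: the spectrometer records interference fringes as a function of wavenumber, and a Fourier transform of that spectrum recovers the depth profile (A-scan). This is a minimal sketch; the source parameters are illustrative assumptions, not specifications of any commercial device.

```python
import numpy as np

# Illustrative source: ~840 nm center wavelength, ~50 nm bandwidth (assumed values).
lam0, dlam, n = 840e-9, 50e-9, 2048
k0 = 2 * np.pi / lam0                     # center wavenumber (rad/m)
dk = 2 * np.pi * dlam / lam0**2           # spectral width in k-space (rad/m)
k = k0 + np.linspace(-dk / 2, dk / 2, n)  # evenly spaced wavenumber samples

# A single reflector 150 um below the zero-delay plane produces cosine
# fringes on the spectrum whose frequency encodes the depth.
z_true = 150e-6
spectrum = np.cos(2 * k * z_true)         # interference term only (DC terms dropped)

# A Fourier transform over wavenumber yields the A-scan; each FFT bin
# corresponds to a depth step of pi / dk.
a_scan = np.abs(np.fft.rfft(spectrum))
peak_bin = 1 + np.argmax(a_scan[1:])      # skip the residual DC bin
z_est = peak_bin * np.pi / dk

print(f"recovered depth: {z_est * 1e6:.1f} um")  # close to 150 um
```

In a real spectrometer the samples are evenly spaced in wavelength rather than wavenumber, so resampling to a linear k-grid precedes the FFT; that step is omitted here for brevity.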

For SD-OCT devices, technical improvements have enabled scan rates of up to 250,000 Hz in commercially available devices [6, 7]. The Spectralis® device by Heidelberg Engineering (Heidelberg, Germany) was the first commercially available SD-OCT device to combine the OCT technique with a confocal scanning laser ophthalmoscope (cSLO) using a near-infrared laser light source (815 nm, Fig. 4.1b). The cSLO features simultaneous eye tracking based on a retinal fundus reflectivity image, enabling accurate and repeatable alignment of OCT images, advanced noise reduction, and an auto-rescan function for precise placement of follow-up scans [8].

Commercial SS-OCT devices employ a longer-wavelength (>1050 nm) laser light source and reach scan rates as fast as 200,000 Hz. The longer wavelength is thought to enhance visualization of subretinal tissue and choroidal structures (Fig. 4.1c) [9–13]. Similar effects are sought with techniques such as image averaging and/or enhanced depth imaging (Fig. 4.1d).

The astounding clinical implications and the numerous potential research applications have led to the rapid acceptance and integration of OCT and cSLO technology in the ophthalmic community. Ongoing improvements of these technologies will further deepen the understanding of the physiology and pathophysiology of various retinal conditions, a prerequisite for the development and approval of new therapeutic approaches. This chapter aims to review the role of OCT diagnostics in retinal conditions, with particular emphasis on differential diagnosis as well as monitoring of progression and therapeutic outcomes.

© The Author(s) 2019, J. F. Bille (ed.), *High Resolution Imaging in Microscopy and Ophthalmology*, https://doi.org/10.1007/978-3-030-16638-0_4

P. L. Müller: Department of Ophthalmology, University of Bonn, Bonn, Germany

**Fig. 4.1** Evolution of OCT imaging in retinal diagnostics. The different generations of OCT imaging devices are demonstrated in exemplary healthy subjects. (**a**) Time-domain OCT (Stratus OCT, Carl Zeiss Meditec, Jena, Germany) gives the investigator a 3-dimensional impression of the retina and retinal layers despite restricted axial and lateral resolution, so that retinal pathologies can more easily be localized and followed up. (**b**) Spectral-domain OCT (Spectralis®, Heidelberg Engineering, Heidelberg, Germany) combines the OCT technique with a confocal scanning laser ophthalmoscope for eye tracking (left) and distinctly improves resolution and sampling speed, allowing for segmentation of individual retinal layers (colored lines at the right). These improvements have made OCT one of the most important diagnostic devices for differential diagnosis, determination of progression or treatment effects, and treatment indication in clinical routine as well as in study environments. However, due to the imaging wavelength, visualization of deeper structures (i.e., the choroid) may be limited. Other OCT imaging techniques such as (**c**) swept-source OCT (PLEX Elite 9000, Carl Zeiss Meditec, Dublin/CA, USA) or (**d**) the enhanced depth imaging mode enhance the visualization of subretinal tissue, while detail of the superficial retinal layers is reduced

#### **4.2 Application of OCT in Retinal Diagnostics**

OCT technology has revolutionized modern ophthalmology during the last decades. By now, OCT is widely used in clinical practice and trials, as it is a noninvasive, quick and reproducible imaging modality. Advancements in OCT technology have improved differential diagnosis, knowledge of the pathophysiology, and the ability to monitor disease progression as well as therapeutic effects. Diagnostic capabilities will be reviewed across a range of retinal conditions, including common diseases such as age-related macular degeneration (AMD), diabetic retinopathy and retinal vascular diseases, as well as rare retinal diseases including hereditary dystrophies. The depth-resolved visualization of individual retinal layers allows for localization of altered structures, enabling differentiation of diseases affecting the outer retina from pathologies that primarily affect the inner retina. The precision and accuracy of the technology further allow for visualization and clinical assessment of subtle structural alterations and different disease stages.

#### **4.2.1 Age-Related Macular Degeneration**

In the developed world, AMD is the leading cause of irreversible visual impairment in adults over 60 years of age [14]. OCT imaging allows for 3-dimensional visualization and assessment of the integrity or disruption of each individual retinal layer, providing precise detection of early changes in both the atrophic and the neovascular spectrum of the disease [14].

The clinical hallmark of AMD is the deposition of acellular, polymorphous material between the retinal pigment epithelium (RPE) and Bruch's membrane ('drusen'), as well as the appearance of pigmentary changes (hyper- and hypopigmentation) [15]. AMD-related drusen can be differentiated into soft drusen and cuticular drusen by combining OCT and cSLO imaging characteristics. Other deposits located above the RPE–Bruch's membrane band correspond to reticular pseudodrusen (Fig. 4.2) and acquired vitelliform lesions [16]. Soft drusen appear as discrete areas of RPE elevation with variable reflectivity, reflecting the heterogeneous composition of the underlying material (Fig. 4.2a) [17, 18]. Large confluent drusen may sometimes be accompanied by fluid accumulation under the retina, seen in the depression between drusen. Ruling out the presence of choroidal neovascularization is important in order to avoid unnecessary treatment with anti-angiogenic therapies [19], and OCT angiography (described in Chap. 6) images may be useful in these challenging cases. Drusen can further be accompanied by discrete changes in the overlying neurosensory retina, including disruption of the ellipsoid zone band and the external limiting membrane, thinning of the outer nuclear layer, or intraretinal pigment clumping and migration, all of which can be visualized by OCT [18, 20].

Cuticular drusen were first described as 'basal laminar drusen' by Gass in 1974 as numerous, small, round, uniformly sized, yellow, sub-RPE lesions that show early hyperfluorescence on fluorescein angiography, resulting in a "starry night" appearance [21, 22]. The ultrastructural and histopathological characteristics of cuticular drusen are similar to those of hard drusen; however, their lifecycle and macular complications are more comparable with those of soft drusen [23]. On OCT, cuticular drusen are classically described as a saw-tooth elevation of the RPE with rippling (and occasional disruption) of the overlying ellipsoid zone band and the external limiting membrane (Fig. 4.2b) [24].

Reticular pseudodrusen were first described in 1990 as a peculiar yellowish pattern in the fundus of AMD patients, and in 1991 as an ill-defined network of broad interlacing ribbons [25, 26]. OCT enabled an improved characterization of reticular pseudodrusen (Fig. 4.2c) showing that these lesions correspond to granular hyperreflective material between the RPE and the ellipsoid zone band. As a result, the term 'subretinal drusenoid deposits' has been proposed [27].

Drusen may be accompanied by acquired vitelliform lesions, which are believed to occur as a result of RPE dysfunction leading to impaired photoreceptor outer segment turnover. Acquired vitelliform lesions are clinically apparent as yellowish material and mimic the appearance of choroidal neovascularisation (CNV) on fluorescein angiography. In OCT imaging, the subretinal heterogeneous material is well separable from fluid [28]. In some cases, the RPE phagocytoses the subretinal material, leading either to resolution of the lesion or to atrophy of the RPE and the outer retinal layers. In other cases, however, a conversion into a neovascular form is seen (Fig. 4.3) [27, 28].

It has been shown that drusen diameter and volume are significant risk factors for progression to advanced AMD. Accordingly, early and intermediate AMD are differentiated, inter alia, by drusen size smaller or larger than 125 μm, respectively [16]. As manual analysis of drusen on color fundus images is neither reliable nor practical, efforts are underway to use OCT for automated detection and quantification of drusen size, area, and volume. This may help to identify patients at high risk of disease progression and to institute appropriate prophylactic interventions as they become available [27].
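The 125 μm cut-off mentioned above lends itself to automated grading. A minimal sketch follows; the function name and the simplified one-threshold rule are illustrative assumptions, as real grading schemes use additional criteria (e.g., pigmentary changes) beyond drusen size.

```python
# Hypothetical helper mirroring the 125 um drusen-size cut-off quoted in the
# text (early vs. intermediate AMD); real grading uses further criteria.
def amd_stage_by_drusen(max_drusen_diameter_um: float) -> str:
    """Assign an AMD stage from the largest drusen diameter (micrometers)."""
    if max_drusen_diameter_um <= 0:
        return "no drusen"
    return "early AMD" if max_drusen_diameter_um <= 125 else "intermediate AMD"

print(amd_stage_by_drusen(90))    # early AMD
print(amd_stage_by_drusen(180))   # intermediate AMD
```

An automated pipeline would obtain `max_drusen_diameter_um` from OCT-based segmentation of RPE elevations rather than from manual measurement.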

Late AMD forms include macular atrophy and neovascular AMD. Macular atrophy is defined by areas of RPE atrophy accompanied by loss of photoreceptors and varying degrees of choroidal impairment; in the absence of neovascularization, the term geographic atrophy (GA) is frequently used [29]. On OCT, GA appears as areas of sharply demarcated choroidal hyperreflectivity resulting from loss of the overlying RPE, associated with thinning or loss of the outer retinal layers and eventually choroidal thinning, all of which can be tracked over time with this technique [30, 31]. As OCT imaging is not affected by macular pigment, the reproducibility of GA progression measurements is preserved, especially in patients with foveal-sparing disease manifestation (Fig. 4.4) [27]. Furthermore, OCT enables imaging of subtle changes such as regressing drusenoid material and islands of preserved photoreceptors within GA or in the junctional zone; even preapoptotic stages of neuronal cellular elements can be clearly visualized [32]. Evaluation of choroidal alterations and junctional zones of GA on OCT and cSLO images further provides insight into the pathogenesis of GA and the relative roles of the choriocapillaris, RPE and photoreceptors in the initiation and propagation of this condition. This allows for definition of future treatment targets as well as estimation of individual progression speed [33–35].

**Fig. 4.2** (**a**–**c**) Subtypes of AMD-related drusen. From left to right, fundus color, fundus autofluorescence, and optical coherence tomography images of soft drusen, cuticular drusen and reticular pseudodrusen are shown. Source: Gliem M et al.: Quantitative Fundus Autofluorescence in Early and Intermediate Age-Related Macular Degeneration. *JAMA Ophthalmology*. 2016. Reprinted with permission. This figure is not covered by the CC BY license

**Fig. 4.3** Acquired vitelliform lesions. Acquired vitelliform lesions are localized to the subretinal space (**a**, **b**). In progression, the subretinal material may be phagocytosed and the acquired vitelliform lesions may seem to contain subretinal fluid on OCT (**c**, **d**). Source: Keane P et al.: Evaluation of Age-related Macular Degeneration With Optical Coherence Tomography. *Survey of Ophthalmology*. 2012. Reprinted with permission. This figure is not covered by the CC BY license

In neovascular AMD, abnormal blood vessels develop from the choroidal circulation (choroidal neovascularization) or from the retinal circulation (retinal angiomatous proliferation, RAP) [36, 37]. Based on histological and OCT findings, an anatomical classification was proposed, coining the terms type 1, type 2 and type 3 neovascularization (NV): type 1 NV is located between the RPE band and Bruch's membrane, type 2 NV lies above the RPE band in the subretinal space, and type 3 NV originates from the deep capillary plexus of the retina and is located in the outer retinal layers. The proliferation of the immature vessels results in fluid exudation and hemorrhage, leading to fluid accumulation between the RPE and Bruch's membrane (retinal pigment epithelial detachment, PED), between the neurosensory retina and the RPE (serous retinal detachment), and within the retinal extracellular space as cystoid lacunae (intraretinal fluid; Fig. 4.5a) [27]. The associated invasion of fibroblasts results in disciform scar formation, with loss of the RPE and overlying photoreceptors and significant disorganization of the overlying retinal architecture [38]. By using OCT, each of these disease-associated changes can be visualized in a 3-dimensional manner. Therefore, treatment indications as well as anti-angiogenic treatment effects can be evaluated much more objectively and precisely than with the summation images provided by invasive fluorescence angiography alone, making the combination of OCT and fluorescence angiography the gold-standard imaging strategy for diagnosing neovascular AMD (Fig. 4.5b) [39]. Other diseases associated with clinical macular edema, including central serous chorioretinopathy (CSCR) and polypoidal choroidal vasculopathy (PCV), can further be differentiated more easily from neovascular AMD as they differ in OCT appearance (e.g., a thicker choroid) [40]. This might be of particular importance in retinal diseases that are not responding to antiangiogenic treatment (see following subchapters).

**Fig. 4.4** Geographic atrophy. Foveal-sparing geographic atrophy is demonstrated by (**a**) fundus autofluorescence imaging (excitation wavelength, 488 nm) and (**b**) OCT. Due to shadowing by macular pigment, foveal involvement may be difficult to determine in fundus autofluorescence images. In OCT images, the area of geographic atrophy is well demarcated due to choroidal hyperreflectivity. Source: Lindner M et al.: Directional Kinetics of Geographic Atrophy Progression in Age-Related Macular Degeneration with Foveal Sparing. *Ophthalmology*. 2015. Reprinted with permission. This figure is not covered by the CC BY license

**Fig. 4.5** Neovascular AMD. In OCT imaging of neovascular AMD, pigment epithelium detachments (arrow) appear as elevations of the RPE band relative to Bruch's membrane, and subretinal (asterisk) and intraretinal (arrowhead) fluid as transparent lacunae associated with leakage in fluorescence angiography (left, **a**). As the effect of antiangiogenic therapy is readily visible in 3-dimensional OCT images, OCT has become the gold standard for therapy monitoring (**b**)

#### **4.2.2 Diabetic Retinopathy and Macular Edema**

Worldwide, diabetic retinopathy is the leading cause of visual impairment in the working-age population. Similar to AMD, diabetic retinopathy is assessed by a multimodal approach, especially as the pathogenesis and clinical features are primarily attributed to retinal vascular damage; thus, fluorescein angiography plays a key role in the diagnosis of the disease. Recent OCT findings indicate that choroidal angiopathy may also be involved, providing further insight into the pathogenesis of diabetic retinopathy. Choroidal thinning is present in patients with diabetic retinopathy and is related to disease severity (Fig. 4.6). Therefore, choroidal thickness analysis using OCT may be an important parameter for assessing the severity of diabetic retinopathy [41–43].

As macular edema is one of the major complications of diabetic retinopathy, and is well treatable with laser photocoagulation, anti-angiogenic or steroid therapy, or a combination thereof, a reliable module for diagnosis and treatment monitoring is needed [44]. The combination of OCT imaging and fluorescence angiography has become the gold-standard imaging strategy in diabetic macular edema, providing high-resolution 3-dimensional retinal information [45–47].

#### **4.2.3 Retinal Vascular Occlusions and Other Vascular Conditions**

In retinal vascular disease, fluorescence angiography is the undisputed diagnostic gold standard. However, macular edema caused by excessive VEGF production may occur. In these cases, laser treatment, intravitreal dexamethasone or antiangiogenic injections have been shown to stabilize and even improve the anatomy as well as the visual acuity of these patients [48].

**Fig. 4.6** OCT features of diabetic retinopathy. The OCT images of nonproliferative diabetic retinopathy (**a**), proliferative diabetic retinopathy (**b**), and diabetic macular edema (**c**) revealed a thinner choroid. Of note, the latter revealed the most diffuse choroidal thinning. Proliferative diabetic retinopathy showed paracentral loss of mainly inner retinal structures. Red arrows highlight the choroid–sclera interface. Focal thinning is indicated by green arrows. Source: Adhi M & Duker J: Optical coherence tomography – current and future applications. *Current Opinion in Ophthalmology*. 2013. Reprinted with permission. This figure is not covered by the CC BY license

For treatment monitoring as well as evaluation of prognosis, OCT is of great value, as it provides 3-dimensional structural information on the involved area and the severity (Fig. 4.7). In eyes with macular edema secondary to retinal vein occlusion, OCT images may show hyporeflective spaces within the retinal nerve fiber layer, which can predict the presence of retinal non-perfused areas, as well as the status of the photoreceptor layer, which directly correlates with visual acuity. In cases of arterial ischemia, hyperreflectivity involving the middle retinal layers may localize the ischemic injury to the deep capillary plexus, as seen in paracentral acute middle maculopathy.

**Fig. 4.7** Retinal vein occlusion. Similar to neovascular AMD, the effect of steroid and antiangiogenic therapy for macular edema secondary to retinal vein occlusion is readily visible in 3-dimensional OCT images. Therefore, multimodal assessment with OCT and fluorescence angiography is the current gold standard for these entities

#### **4.2.4 Central Serous Chorioretinopathy and Related Diseases**

Central serous chorioretinopathy (CSCR) is typically characterized by a serous retinal detachment in the acute phase, thought to be caused by a generalized disruption of the choroidal vasculature with diffuse hyperpermeability [49]. In OCT imaging, an elevation of the neurosensory retina from the RPE is present, associated with a significant increase in the thickness of the choroid and focal dilation of large choroidal vessels ('pachyvessels') [2]. The latter finding implies the pathophysiologic role of hydrostatic pressure in choroidal vessels and distinguishes CSCR from other causes of subretinal fluid, indicating the need and importance of OCT assessment of choroidal thickness. CSCR usually resolves spontaneously within a few months. However, some patients demonstrate a chronic form with persistent subretinal fluid and eventual permanent visual loss. These cases might further develop secondary CNV requiring prompt diagnosis to avoid delayed treatment. Even in the absence of CNV, chronic forms of CSCR may require intervention with treatments such as laser photocoagulation and photodynamic therapy (PDT). Recent data showed a significant reduction in choroidal thickness following PDT (Fig. 4.8) [50]. Given the widespread use of PDT for the treatment of chronic CSCR, analysis of choroidal thickness by OCT may be a parameter to assess for disease activity following treatment [2].

#### **4.2.5 Pathologic Myopia**

Eyes with pathologic myopia (refractive error of at least −6 diopters and/or axial length greater than 26.5 mm) are at high risk of developing retinal abnormalities. Examination of the myopic fundus is challenging due to extreme thinning of retinal and choroidal tissue; thus an accurate and complete evaluation may only be performed with high-resolution imaging including OCT. Common findings in pathologic myopia are chorioretinal atrophy (diffuse or patchy) and tractional changes (macular holes, epiretinal membranes, retinal schisis, microvascular folds and vascular avulsions). In some cases, the shape of the posterior globe is altered, a finding known as 'staphyloma'. All these findings can be detected and carefully assessed with OCT scans. NV occurs in 5–11% of patients with pathologic myopia and is the most common form of exudative disease within the first four decades of life [51]. OCT in eyes with pathologic myopia is useful to determine the presence of NV and to monitor treatment effects. OCT imaging also allows for an accurate differential diagnosis of findings such as subretinal fluid in dome-shape maculopathy (Fig. 4.9) [52].

#### **4.2.6 Inherited Retinal Diseases and Other Macular Conditions**

Among many other inherited diseases, Sorsby fundus dystrophy, secondary to mutations in TIMP3 (autosomal dominant), and pseudoxanthoma elasticum, secondary to mutations in ABCC6 (autosomal recessive), are frequently associated with NV. In these cases, OCT has become a standard procedure for diagnosis, assessment of disease severity, indication for treatment, and determination of individual progression rates (Fig. 4.10) [53, 54].

**Fig. 4.8** Central serous chorioretinopathy. Horizontal OCT scan from the right eye of a patient with central serous chorioretinopathy before (above) and after (below) verteporfin photodynamic therapy. Treatment was followed by resolution of the subfoveal fluid and by a reduction of the disease-associated choroidal thickening, as measured along the red lines indicating the inner and outer borders of the choroid and shown at the bottom as a function of distance. The vertical green line indicates the location of the centre of the fovea. Source: Pryds A & Larsen M: Choroidal thickness following extrafoveal photodynamic treatment with verteporfin in patients with central serous chorioretinopathy. *Acta Ophthalmologica*. 2012. Reprinted with permission. This figure is not covered by the CC BY license

Another disease that might be associated with NV is macular telangiectasia type 2. Using OCT thickness measurements (often in combination with fluorescein angiography), NV lesions are differentiable from the degenerative changes that are regularly seen within the natural progression of this disease [53].

Apart from evaluation of NV and treatment effects, OCT has significant value in the assessment and differential diagnosis of inherited retinal diseases. Recent studies using OCT have provided new insight regarding the amount of choroidal involvement in the pathogenesis of retinitis pigmentosa, pseudoxanthoma elasticum (PXE) and Stargardt disease [54–56]. The latter even provided evidence for a diffusible factor from the RPE sustaining the choroidal structure.

**Fig. 4.9** Myopia. The second most common form of CNV occurs secondary to myopia magna. While fluorescence angiography shows only slight leakage at the border of the chorioretinal atrophy (red arrow), the associated OCT reveals inhomogeneous material breaking through the outer retinal layers with subretinal fluid (green arrow, **a**). However, subretinal fluid may also be present in myopic eyes with a special configuration, called 'dome-shape maculopathy', which is often only visible in vertical scans and does not respond to antiangiogenic therapy (**b**)

#### **4.2.7 Intraocular Tumors**

Pigmented lesions such as choroidal melanomas, nevi or congenital hypertrophy of the RPE and other intraocular tumors such as hemangiomas, hamartomas or osteomas have also been studied using OCT. OCT has enabled improved delineation of tumor borders, with detailed qualitative and quantitative analysis, as well as characterization of reflectivity properties (Fig. 4.11) [57, 58].

#### **4.2.8 Inflammatory Diseases, Intermediate and Posterior Uveitis**

Intermediate and posterior uveitis may be associated with the development of macular edema, vascular changes in the retina or the choroid, and/or inflammatory lesions. The detection of all these lesions has been enhanced with the use of OCT scans, while providing valuable and reliable information for the challenging follow-up of these patients [59].

#### **4.2.9 Vitreoretinal Interface**

Detection and detailed evaluation of macular holes, epiretinal membranes and tractional changes have been facilitated by OCT images. The International Vitreomacular Traction Study Group classification provided new definitions for vitreomacular adhesion and vitreomacular traction using OCT images [60]. Both can be classified as broad (area of vitreous attachment >1500 μm) or focal (area of vitreous attachment ≤1500 μm). Perifoveal vitreous detachment with persistent posterior cortical vitreous attachment within the central 3 mm is classified as vitreomacular adhesion in the absence of retinal abnormalities, or as vitreomacular traction when associated with intraretinal cysts, subretinal fluid, or flattening of the foveal contour, but in the absence of full-thickness interruption of all retinal layers [60].

**Fig. 4.10** Sorsby fundus dystrophy. Color fundus photograph (left), fluorescein angiography (middle), and SD-OCT images (right) demonstrate macular CNV (**a**–**o**) and juxtapapillary polypoidal choroidal vasculopathy (**p**–**s**) in patients with SFD, as well as the response to treatment with bevacizumab. All subjects show regression of retinal edema after therapy (lower OCT images). The dotted line marks the position of the respective SD-OCT line scan. Source: Gliem M et al.: Sorsby Fundus Dystrophy: Novel Mutations, Novel Phenotypic Characteristics, and Treatment Outcomes. *Invest Ophthalmol Vis Sci*. 2015. Reprinted with permission. This figure is not covered by the CC BY license

**Fig. 4.11** Choroidal osteoma. The color fundus photograph reveals the amelanotic choroidal osteoma in the macula. It measures 3.6 × 4.1 mm (**a**). Ultrasonography reveals a 0.9 mm thick tumour with posterior shadowing (**b**, arrowhead). On the OCT image, the tumour is hyporeflective with intrinsic hyperreflective dots (**c**, arrowhead). The posterior edge of the tumour is visible, allowing for more accurate tumour thickness measurements; the corresponding (white line) measurement on OCT is 320 μm. Source: Freton A & Finger PT: Spectral domain-optical coherence tomography analysis of choroidal osteoma. *Br J Ophthalmol*. 2011. Reprinted with permission. This figure is not covered by the CC BY license

Full-thickness macular holes are foveal defects of all retinal layers from the inner limiting membrane (ILM) to the photoreceptors, with preservation of the RPE. Macular holes are classified as small (≤250 μm), medium (250–400 μm), or large (>400 μm) based on the minimum hole width, and visual outcomes are related to the size of the hole. A lamellar hole is a partial defect with preservation of the photoreceptors. Macular pseudoholes present as changes in the foveal contour that mimic a lamellar macular hole, without retinal layer defects [60].
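The size-based definitions above (adhesion extent and minimum hole width) can be expressed as small grading helpers. This is an illustrative sketch: the function names are assumptions, while the thresholds follow the International Vitreomacular Traction Study Group values quoted in the text.

```python
# Illustrative helpers encoding the IVTS thresholds quoted in the text.
def adhesion_extent(attachment_width_um: float) -> str:
    """Classify vitreomacular adhesion/traction by attachment width."""
    return "focal" if attachment_width_um <= 1500 else "broad"

def hole_size(min_hole_width_um: float) -> str:
    """Classify a full-thickness macular hole by minimum hole width."""
    if min_hole_width_um <= 250:
        return "small"
    if min_hole_width_um <= 400:
        return "medium"
    return "large"

print(adhesion_extent(900), hole_size(320))   # focal medium
```

In practice, both widths would be measured on OCT B-scans; the minimum hole width is taken at the narrowest point of the defect, parallel to the RPE.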

Finally, OCT scans allow for visualization and detection of epiretinal membranes as hyperreflective tissue attached to the inner surface of the retina. OCT imaging often facilitates assessment of the location and extent of the membrane, evaluation of the outer retinal layers, and planning of the surgical technique.

#### **4.3 Pitfalls of OCT in Retinal Diagnostics**

#### **4.3.1 Acquisition Protocol**

Recent advances have led to reliable and fast acquisition of OCT images, enabling broad application in both clinical and experimental settings [2]. The varying indications for use of OCT technology have raised questions concerning the location, density and interpretation of the scans. While flexibility in scanning is important in order to optimize scan protocols, the increasing amount of data requires ever larger storage drives and fast broadband network systems [61].

Using imaging protocols adapted to specific diseases may address these challenges. In exudative macular diseases (e.g., NV), volume scans have the advantage of dividing the central retina into equal proportions in order to identify fluid outside the foveal center as well. In other diseases such as vitreoretinal interface disorders, the fovea and the optic nerve head are the most important areas, while more eccentrically located retinal zones are less relevant. Radial scans are useful in these cases, giving a more precise representation of a circumscribed retinal area where the scans intersect (i.e., the fovea), compared to eccentric areas (Fig. 4.12) [62]. In unclear cases, planning a more detailed scan protocol in the area of interest should typically be considered.

#### **4.3.2 Acquisition Technique**

For acquisition of high-quality OCT images, parameters such as alignment of the camera, focus, detector sensitivity and signal strength are important prerequisites. Automatic registration and matching of OCT images of the same retinal location is an essential tool for monitoring subtle changes over time. The same focus should be kept between different imaging sessions and tilting of the head should be avoided during image acquisition in order to minimize artifacts or inaccuracies [63]. Incorrect settings should be identified before a clinician interprets the results. In order to avoid misinterpretations, operators should be adequately trained and instructed to check the quality and completeness of the data directly after the recordings, as an immediate reacquisition might be possible with the subject still in front of the device [61].

To date, no common industry standard has been established for OCT imaging. In addition, device-dependent differences may occur (e.g., in the apparent retinal thickness), as OCT B-scans are usually displayed stretched in the vertical direction. Accordingly, for better comparability, the same patient should be examined with the same device platform over time. Even simple software updates may change the algorithms and definitions of the automatic segmentation lines, and the comparability of subsequent recordings and their evaluation may therefore be limited [61].

#### **4.3.3 Interpretation**

For adequate evaluation of modern OCT imaging, the collected data should be reviewed in the review software rather than on printed scans or PDF files, as the software offers the possibility to evaluate all collected B-scans individually. Evaluation of a single OCT scan may not be sufficient for differential diagnosis or to determine disease activity and treatment effects (Fig. 4.13) [61].

Correct interpretation of OCT findings is a prerequisite for treatment decisions. Precise knowledge of retinal and macular diseases is mandatory, as focusing on the relevant findings may be

**Fig. 4.12** Pitfalls in OCT acquisition. Application of the 19-line volume scan (left panels) versus a star scan protocol (right panels) in an eye with vitreomacular traction and full-thickness macular hole. Note that the three central B-scans in the volume scan fail to detect the relevant pathologic findings. Source: Schmitz-Valckenberg S et al.: Pitfalls in retinal OCT imaging. *Ophthalmology @ Point of Care*. 2017. Reprinted with permission

**Fig. 4.13** Pitfalls in OCT interpretation I. Choroidal neovascularization (CNV) under antivascular endothelial growth factor therapy. Evaluation of only the central B-scan for activity of the CNV lesion (**a**) would fail adequate interpretation of the disease status, because inferior to the fovea, there is intraretinal fluid indicating disease activity (**b**). Source: Schmitz-Valckenberg S et al.: Pitfalls in retinal OCT imaging. *Ophthalmology @ Point of Care*. 2017. Reprinted with permission

challenging with an increasing number of B-scans. In addition, the exact assignment of OCT layers to anatomical structures (following the consensus of the International Nomenclature for Optical Coherence Tomography Panel), as well as pattern recognition, plays an important role in image evaluation, as the differential diagnosis or treatment indication can be supported by recognition of characteristic OCT findings. For example, dome-shaped elevations of the RPE in the presence of soft drusen indicate exudative AMD, whereas marked thickening of the choroid in OCT images without soft drusen would point to CSCR. Further examples of challenging OCT interpretations are demonstrated in Fig. 4.14.

Projection artefacts may derive from hyperreflective changes in the vitreous (e.g., floaters) or on the surface of the retina (e.g., epiretinal membranes) that may lead to suppression of structures in deeper retinal layers. In such scenarios, comparison with other imaging modalities or ophthalmoscopy is helpful. When applying automatic analysis algorithms, operators and clinicians should evaluate the segmentation of retinal boundaries in each B-scan and, if necessary, manually correct them [61].

The quantitative evaluation of OCT findings requires precise definition of individual parameters. To date, there is no industry standard or consensus, with different terms being used in

**Fig. 4.14** Pitfalls in OCT interpretation II. In contrast to intraretinal cystoid lesions secondary to CNV, macular telangiectasia type 2 may reveal mimicking alterations within the inner retina due to degenerative changes (**a**). Note that there is no thickening of the retina. Further misinterpretation concerning indication for anti-VEGF treatment might derive from

degenerative outer retinal tubulations (arrows) as visualized by an OCT B-scan and by an OCT en face image (**b**), epiretinal membrane (**c**), or choroidal folds of different origins (e.g. orbital tumor, (**d**)). Source: Schmitz-Valckenberg S et al.: Pitfalls in retinal OCT imaging. *Ophthalmology @ Point of Care*. 2017. Reprinted with permission

parallel. The correct geometric location and segmentation of relevant anatomic landmarks is crucial for meaningful and correct quantitative analysis, but the definition of landmarks such as the fovea may be challenging in the presence of pathologic changes. OCT interpretation is usually based on the 1:1 pixel presentation mode, in which, depending on the device, the image information is compressed in the lateral relative to the anteroposterior dimension. It has been shown that quantification of areas or distances in the 1:1 pixel presentation mode is prone to overestimation of values in the anteroposterior dimension; therefore, measurements should be performed in the 1:1 μm presentation mode [64, 65]. Inaccuracies of measurements within B-scans may further occur if the retinal layers are not orthogonal to the laser beam. To determine correct values, measurements should always be performed parallel to the beam path. Furthermore, the method of scaling must be considered when measured values are specified in the metric system.
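
The scaling issue described above can be illustrated with a short sketch: distances measured on a B-scan must be converted with separate lateral and axial scale factors before being reported in micrometers. The function and the scale values below (11.3 μm/px laterally, 3.9 μm/px axially) are hypothetical, chosen only to show the principle, not the parameters of any particular device.

```python
import math

def distance_um(p1, p2, lateral_um_per_px, axial_um_per_px):
    """Convert a distance between two B-scan pixel coordinates (x, z)
    into micrometers using separate lateral and axial scale factors."""
    dx = (p2[0] - p1[0]) * lateral_um_per_px
    dz = (p2[1] - p1[1]) * axial_um_per_px
    return math.hypot(dx, dz)

# A purely axial distance of 10 pixels corresponds to ~39 um here,
# while a naive "1:1 pixel" reading that applies the lateral factor
# to both axes would report ~113 um.
depth = distance_um((0, 0), (0, 10), lateral_um_per_px=11.3, axial_um_per_px=3.9)
naive = 10 * 11.3
print(round(depth, 1), round(naive, 1))
```

This makes concrete why measurements taken in the anteroposterior dimension are overestimated when the anisotropic pixel scaling is ignored.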

Several structures are usually not distinguishable because they reflect little of the perpendicularly incident laser beam. By changing the direction of the laser beam relative to the retina, e.g. by tilting the subject's head, some of these structures, such as the Henle fiber layer, may become visible [66]. Other alterations, such as retinal hemorrhages, remain concealed, however, as the imaging light shows little or no interference in the area of bleeding (Fig. 4.15).

In conclusion, while OCT alone should not be the sole basis for diagnostic and treatment recommendations, its application in daily clinical practice and for research purposes has become invaluable. It should be regarded as one diagnostic procedure within a multimodal imaging assessment including fundoscopy and fluorescence angiography, together with a careful anamnesis and assessment of the patient's complaints. The latter are known to differ frequently from the severity of OCT findings; in such cases, it is essential to perform complementary imaging and diagnostic procedures [61].

**Fig. 4.15** Pitfalls in OCT interpretation III. Hemorrhages (arrow on fundus photograph) are frequently not visualized by near-infrared reflection and optical coherence tomography imaging due to the wavelength used. Source: Schmitz-Valckenberg S et al.: Pitfalls in retinal OCT imaging. *Ophthalmology @ Point of Care*. 2017. Reprinted with permission

#### **4.4 Summary and Outlook**

Many posterior segment ocular diseases involve both the retina and the choroid, as the RPE, Bruch's membrane and the choroid represent a coadjutant functional complex [67]. This may be particularly important in retinal disorders such as AMD, the most common cause of legal blindness in industrialized countries, characterized by abnormal extracellular material deposition either below or above the retinal pigment epithelial layer [68, 69]. Even single-gene retinal dystrophies such as *ABCA4*-related retinopathy, which primarily affects the RPE through excessive accumulation of lipofuscin, or pseudoxanthoma elasticum (PXE), which leads to calcification of Bruch's membrane, have been described to reveal choroidal alterations [54, 55, 70, 71]. The combination of shorter and longer wavelength light sources within one device might combine the advantages attributed to SD-OCT (i.e., better resolution for visualization of the retinal layers) and SS-OCT (i.e., visualization of the choroid). This might allow for optimum visualization of intraretinal as well as subretinal structures without temporal or spatial separation. In 2017, the first OCT device using laser light sources of different wavelengths was built at the Technical University of Biel and the University of Basel, Switzerland. First clinical data, as well as the clinical value and commercial feasibility of the device, remain to be demonstrated.

Since the beginning, continuous improvements have been made to scan rates as well as axial and lateral resolution. Commercial OCT systems achieve scan rates of up to 250,000 Hz and an axial resolution below 7 μm [1, 6]. Faster imaging improves patient comfort and reduces acquisition time, increasing the likelihood of better scan quality. It also enables volumetric as well as 3-dimensional analysis of various pathological features, including choroidal neovascularization and intraretinal fluid. The latter might help in monitoring disease progression and treatment effects [72]. Furthermore, higher image quality through improved resolution will further enhance automated segmentation and analysis, a field of rising importance in view of the growing applications of artificial intelligence and machine learning in ophthalmology [73–75].

During the last decades, OCT technology has revolutionized the retina subspecialty field. OCT imaging now plays a pivotal role in understanding, diagnosing, and monitoring natural history and treatment effects in AMD, diabetic retinopathy, retinal vascular diseases, CSCR, high myopia and many other retinal and choroidal conditions. High-resolution and high-quality multimodal assessment, in combination with continuous innovations of the OCT imaging modality, aims to further improve the clinical assessment of retinal and choroidal diseases.

#### **References**


vascular age-related macular degeneration. Investig Opthalmol Vis Sci. 2016;57:OCT14. https://doi. org/10.1167/iovs.16-19969.


growth factor agents. Surv Ophthalmol. 2015;60:204– 15. https://doi.org/10.1016/j.survophthal.2014.10.002.


Am J Ophthalmol. 2016;170:58–67. https://doi. org/10.1016/j.ajo.2016.07.023.


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Ophthalmic Diagnostic Imaging: Glaucoma**

Robert N. Weinreb, Christopher Bowd, Sasan Moghimi, Ali Tafreshi, Sebastian Rausch, and Linda M. Zangwill

#### **5.1 Introduction**

The detection and monitoring of glaucoma customarily involves several processes that include diagnostic modalities such as subjective evaluation of the optic nerve head (ONH), visual field testing, and intraocular pressure measurements. These traditional methods of assessing glaucoma have several key limitations that dictate the need for supplementary approaches. The diagnostic assessment of the ONH in glaucoma by ophthalmoscopic examination or serial stereoscopic photographs is highly dependent on observer skills, resulting in high inter- and intra-observer variation that affects its utility [1–3]. Visual field analysis through automated perimetry is a widely used technique that is considered an established clinical endpoint and is arguably the gold standard for evaluation of glaucoma and for monitoring of disease progression. Although it is sensitive and specific at detecting glaucomatous functional loss, automated perimetry has several significant limitations [4–7]. The test requires the subjective input of the tested individual, making it prone to high short- and long-term fluctuation. This fluctuation, induced by the subjective nature of the test, necessitates multiple tests to improve the reliability of the technique, delaying the recognition of glaucomatous damage [8]. Several studies have shown that detectable glaucomatous field abnormalities may be preceded by structural changes of the ONH and nerve fiber layer [9–18]. Furthermore, intraocular pressure (IOP) is the major identified risk factor for the development of glaucomatous damage and is the only modifiable risk factor to date. Although lowering IOP serves to impede the progression of retinal ganglion cell degenerative change [19–21], the high inter-individual variability and the diurnal variation in intraocular pressure have limited the use of this parameter for the detection of the disease.
Moreover, intraocular pressure values do not indicate whether damage has occurred, or to what extent. In addition, elevated IOP (i.e. ocular hypertension) does not necessarily result in glaucomatous damage [14].

While detection of glaucomatous structural damage to the eye during the earliest stage and precise assessment of this change are critical aspects of managing the disease, both feats are challenging. Glaucomatous damage is largely irreversible and, therefore, eyes with structural damage must be identified as early and as

R. N. Weinreb · C. Bowd · S. Moghimi · L. M. Zangwill Hamilton Glaucoma Center, Shiley Eye Institute, and The Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, USA

A. Tafreshi (\*) · S. Rausch Heidelberg Engineering GmbH, Heidelberg, Germany

accurately as possible because they are at risk for continued injury. It has been suggested that the earlier glaucoma is detected and treated, the greater the likelihood that medical or surgical intervention will delay or prevent the progression of glaucomatous neuropathy and subsequent functional impairment [22–24]. Furthermore, because glaucoma progresses slowly, it is important to detect real change due to disease that is beyond normal age-related loss and short-term and long-term fluctuations. This assumption underscores the need for an accurate and reproducible quantitative evaluation of the eye.

During the past three decades, there has been significant development and implementation of several imaging technologies designed to objectively and quantitatively detect glaucomatous neuropathy at early stages of disease. Beyond early detection, quantitative and objective imaging devices offer a more sensitive way to detect glaucomatous progression when compared with clinical qualitative assessments. One of the earliest imaging devices introduced to the ophthalmic field was a confocal scanning laser ophthalmoscope (cSLO), developed to assess optic disc topography in the late 1980s (Laser Tomographic Scanner, Heidelberg Instruments). With reductions in cost and the advent of improved hardware, the first practical commercial cSLO device was introduced in 1991 [Heidelberg Retina Tomograph (HRT); Heidelberg Engineering, Heidelberg, Germany].

Imaging instruments provide objective, quantitative measures of neuroretinal rim thinning, RNFL atrophy, and excavation of the optic cup, and are increasingly utilized in the clinical management of glaucoma patients. This is due in part to the availability of summary information that can easily be used in clinical management decisions. For example, most instruments now include a reference database for making statistical indications of whether a patient measurement is "Within Normal Limits," "Borderline," or "Outside Normal Limits." In addition, each device provides a measure of image quality so that the clinician can determine whether the image is of sufficient quality to be utilized in clinical management decisions. With continuous developments in imaging technology like spectral domain OCT (SD-OCT) and advancements in research applications of such technologies, the value of these devices in glaucoma management is likely to continue growing.

Although in vivo imaging with cSLO, time-domain OCT (TD-OCT), and spectral-domain OCT (SD-OCT) has been commercially available for the management of glaucoma for over 10 years, interpretation and utilization of the results remains a challenge. Clinical research, however, continues to significantly advance the relevance and utility of diagnostic imaging devices by enabling visualization of high-resolution, detailed images and by providing sophisticated data analysis strategies. Such improvements increase efficiency while providing precise, accurate analysis of the retinal data produced by each device [25–27]. This chapter reviews diagnostic imaging techniques that have advanced and continue to advance the diagnosis and management of glaucoma.

#### **5.2 The Heidelberg Retina Tomograph: Confocal Scanning Laser Ophthalmoscope (cSLO)**

The first cSLO device developed as a diagnostic aid for glaucoma, the Heidelberg Retina Tomograph (HRT), utilizes confocal optics to obtain multiple measures of retinal height at consecutive focal planes to provide a topographic map that extends from the retinal anterior surface down to the lamina cribrosa. The HRT platform includes comprehensive software that facilitates image acquisition, storage, retrieval, and analysis. After manually delineating the optic disc margin by placing a contour line along the inner edge of the scleral ring at the baseline exam, stereometric parameters are provided to describe the retinal topography of each image. The contour line is automatically transferred to all follow-up examinations. Many of the stereometric parameters are calculated based on a standard reference plane set 50 μm posterior to the average contour line height (i.e., retinal height) at a 5° sector along the temporal rim, an area thought to be least affected by glaucomatous progression and therefore to change minimally over time. Stereometric parameters include: disc area (area within contour), rim area (area within contour and above reference plane), cup area (area within contour and below reference plane), rim volume, cup volume, mean cup depth, mean height of contour, an indirect measure of retinal nerve fiber layer (RNFL) thickness, and cup shape (Fig. 5.1).
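
The reference-plane logic described above can be sketched in a few lines: given a topography map, a contour (disc) mask, and a reference-plane height, the rim is the part of the disc above the plane and the cup the part below it. This is a simplified illustration with hypothetical names and units, not the HRT implementation.

```python
import numpy as np

def stereometric_params(height, disc_mask, ref_plane, px_area_mm2):
    """Illustrative HRT-style stereometric parameters from a topography map.
    `height` holds per-pixel retinal height (mm), `disc_mask` marks pixels
    inside the contour line, `ref_plane` is the reference-plane height
    (e.g. 50 um posterior to the temporal contour height), and
    `px_area_mm2` is the area of one pixel. All names are hypothetical."""
    above = disc_mask & (height >= ref_plane)   # neuroretinal rim pixels
    below = disc_mask & (height < ref_plane)    # optic cup pixels
    disc_area = disc_mask.sum() * px_area_mm2
    rim_area = above.sum() * px_area_mm2
    cup_area = below.sum() * px_area_mm2
    # Cup volume: integrate the depth below the reference plane over cup pixels.
    cup_volume = (ref_plane - height)[below].sum() * px_area_mm2
    return disc_area, rim_area, cup_area, cup_volume
```

Disc area here is simply the pixel count inside the contour times the pixel footprint, so rim area and cup area partition it exactly, mirroring the parameter definitions in the text.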

#### **5.2.1 Clinical Development**

Several key clinical research studies contributed to the advancement and development of the cSLO technology for assessment of the optic nerve head. In 1993, Weinreb et al. recommended acquiring multiple images at each visit and showed that with three images excellent reproducibility could be obtained within a short time and at reasonable cost [28]. This study led to the implementation of software that automatically acquires three sets of three-dimensional images at each session. If the quality of at least one of the series is insufficient (for reasons such as fixation loss), acquisition continues automatically until three useful series are obtained. The same group also showed that cSLO imaging is highly reproducible in patients with dilated [29] or undilated [30] pupils. There was no significant difference between the standard deviation of a single height measurement in normal and glaucomatous eyes, and no correlation was found between the standard deviation of the measurements and pupil size or subject age [29]. Zangwill et al. demonstrated moderate agreement between clinicians and the HRT in estimating cup/disc ratios, with the greatest disagreement in discs with gradual slopes and pallor. New quantitative criteria were then established for characterizing a disc as glaucomatous using the HRT [31]. A quantitative method was subsequently developed for analyzing the topographic relationship between structural and functional damage in patients with glaucoma [32].

In 1998, Wollstein and associates introduced the Moorfields Regression Analysis (MRA) to the HRT. The MRA classification technique compares global and local rim area measurements (reference plane dependent) to a normative database, taking into account disc area and age [33]. A few years later, a machine learning-based diagnostic classifier called the Glaucoma Probability Score (GPS) was introduced and implemented in the third generation of the HRT (HRT III). The GPS was one of the earliest applications of machine learning in the ophthalmic field, using a geometric model to describe the shape of the optic disc/parapapillary retina (globally and locally) based on five parameters (cup size, cup depth, rim steepness, horizontal retinal nerve fiber layer curvature, and vertical retinal nerve fiber layer curvature) [34]. These parameters are then interpreted by a relevance vector machine classifier [35], and the resulting output describes the probability that the eye is glaucomatous (based on fit to training data from healthy and glaucoma eyes). This technique does not depend on an operator-drawn contour line or a reference plane and is therefore operator independent (Fig. 5.2). Results from both classification techniques are reported as 'within normal limits', 'borderline' or 'outside normal limits' globally, and for each of six disc sectors relative to the normative data.
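
A regression-based classification in the spirit of the MRA can be sketched as follows: rim area is regressed on disc area in a healthy reference sample, and an eye is flagged when its rim area falls below one-sided prediction limits. The linear model, thresholds, and function names are simplified assumptions for illustration; the published MRA additionally adjusts for age and classifies each disc sector separately.

```python
import numpy as np

def mra_like_classify(disc_area, rim_area, norm_disc, norm_rim):
    """Sketch of a Moorfields-style classification (hypothetical, simplified):
    regress rim area on disc area in a healthy reference sample, then flag an
    eye whose rim area falls below one-sided 99% / 99.9% prediction limits."""
    slope, intercept = np.polyfit(norm_disc, norm_rim, 1)
    residuals = norm_rim - (slope * norm_disc + intercept)
    sd = residuals.std(ddof=2)          # residual spread of the healthy fit
    predicted = slope * disc_area + intercept
    z = (rim_area - predicted) / sd     # standardized deviation from normal
    if z < -3.09:                       # below the 99.9% one-sided limit
        return "outside normal limits"
    if z < -2.33:                       # below the 99% one-sided limit
        return "borderline"
    return "within normal limits"
```

The traffic-light output mirrors the three MRA categories reported by the device; the z cutoffs correspond to the one-sided normal quantiles for 99% and 99.9%.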

In 2000, Chauhan et al. [36] introduced Topographic Change Analysis (TCA), a progression analysis tool that became a gold standard in the assessment of glaucomatous ONH changes. The TCA quantifies change in the topography of the ONH using the first image as the baseline and subsequent images as follow-up examinations. The TCA does not require a defined contour line to determine areas of significant change, as it assesses the height of the optic disc and retinal surface at each follow-up measurement and compares these to the baseline measurement. The images are further analyzed using arrays of 4 × 4 pixels, called superpixels. Superpixels allow for pooling over a larger area and yield more repeated measures for analysis; in steep areas such as the edge of the cup, the variability is greater than in flat areas. An analysis of variance model is then calculated for the topographic measurements within each superpixel.
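
The superpixel pooling step can be sketched as follows, assuming stacks of repeated baseline and follow-up topographies. The real TCA fits an analysis of variance model per superpixel; this illustration only pools 4 × 4 neighborhoods and reports the mean height change with a simple spread estimate, and all names are hypothetical.

```python
import numpy as np

def superpixel_change(baseline, followup, k=4):
    """Illustrative TCA-style pooling. `baseline` and `followup` are stacks
    of repeated topography images with shape (n_images, H, W). Pooling the
    height differences over k x k superpixels gives more observations per
    location, which is what makes a per-superpixel variance test feasible."""
    diff = followup.mean(axis=0) - baseline.mean(axis=0)   # (H, W) change map
    h, w = diff.shape
    # Crop to a multiple of k, then fold each k x k block into its own axes.
    blocks = diff[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    mean_change = blocks.mean(axis=(1, 3))                 # per-superpixel mean
    spread = blocks.std(axis=(1, 3), ddof=1) / np.sqrt(k * k)
    return mean_change, spread
```

Pooling 16 pixels per superpixel (times the image repeats) is what the text means by "more repeated measures for analysis": individual pixel heights are too noisy, especially on steep cup edges, to test for change on their own.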

**Fig. 5.1** After manually delineating the optic disc margin by placing a contour line along the inner edge of the scleral ring at the baseline exam, stereometric parameters are provided to describe the retinal topography of each image. Stereometric parameters include: disc area (area within contour), rim area (area within contour and above reference plane), cup area (area within contour and below reference plane), rim volume, cup volume, mean cup depth, mean height of contour, an indirect measure of retinal nerve fiber layer (RNFL) thickness, and cup shape



**Fig. 5.2** The Glaucoma Probability Score (GPS) uses a geometric model to describe the shape of the optic disc/ parapapillary retina (globally and locally) based on five parameters. The resulting output describes the probability that the eye is glaucomatous. This technique does not depend on an operator drawn contour line or a reference plane and is therefore operator independent. Results are reported as 'within normal limits', 'borderline' or 'outside normal limits' globally, and for each of six disc sectors relative to the normative data

#### **5.2.2 Clinical Validation**

The combination of these HRT (cSLO) diagnostic tools became the gold standard for imaging and monitoring of the optic nerve head in glaucoma in the early to mid-2000s. Several large and seminal studies contributed to the validation of the diagnostic parameters offered by the HRT.

Bowd et al. validated the clinical utility of this tool by showing that TCA parameters can discriminate between progressing glaucoma eyes and longitudinally observed healthy eyes, suggestive of the ability of the HRT with TCA to detect early ONH changes due to glaucoma [37]. The Ocular Hypertension Treatment Study (OHTS), sponsored by the National Institutes of Health/ National Eye Institute, is a multicenter randomized clinical trial designed to evaluate the safety and efficacy of topical glaucoma medication in delaying or preventing the onset of glaucomatous VF loss or optic nerve deterioration in participants with ocular hypertension [14, 38]. In OHTS, the presence of clinically significant disc changes during follow-up was determined by evaluation of serial stereoscopic optic disc photographs.

The Confocal Scanning Laser Ophthalmoscopy Ancillary Study to the Ocular Hypertension Treatment Study was the first multicenter clinical trial to use cSLO imaging to monitor changes in the optic disc [39]. This study evaluated the effectiveness of various HRT parameters in detecting the presence and progression of glaucomatous optic disc damage and determined whether optic disc topographic measurements are an accurate predictor of visual field loss. The 451 participants in the OHTS CSLO ancillary study were recruited from seven of the 22 OHTS centers and the HRT examinations were obtained annually after pupillary dilation at the time of scheduled OHTS fundus examination and optic disc photography.

The baseline data from OHTS indicated that cSLO measurements correlated well with expert evaluation of stereoscopic photography and that differences in topographic optic disc parameters between African Americans with ocular hypertension and other racial groups are largely explained by the larger optic disc area in the African Americans [38–40]. This latter result highlighted the need to consider race and optic disc size when evaluating the appearance of the optic disc in glaucoma. Therefore, several of the parameters implemented in the HRT are reported and compared to a race specific normative database.

#### **5.2.3 Surrogate Endpoints and Progression**

The HRT neuroretinal rim parameters have been shown to be predictive of functional loss and to serve as suitable surrogate endpoints in glaucoma clinical trials [15, 41–44]. The OHTS data also showed that baseline topographic optic disc measurements can predict the onset of primary open angle glaucoma in patients with ocular hypertension [15]. OHTS results suggest that baseline GPS, MRA, and stereometric parameters alone or when combined with baseline clinical and demographic factors can be used to predict the development of POAG endpoints in OHTS participants and are as effective as stereophotographs for estimating the risk of developing POAG in ocular hypertensive subjects [15, 41]. In 2008, Alencar et al. assessed whether baseline GPS results are predictive of progression in patients suspected of having glaucoma. Their results showed that baseline GPS are in fact predictive and that they perform as well as subjective but expert assessment of the optic disc. They further suggested that the HRT GPS could potentially replace stereophotographs as a tool for estimating the likelihood of conversion to glaucoma [42]. In 2009, Chauhan et al. concluded that patients who presented with glaucomatous visual field progression were up to three times more likely to have prior disc changes as measured by TCA [43]. Medeiros et al. showed that progressive rim area loss, as defined by the HRT parameters, was highly predictive of the development of visual field loss in glaucoma and explained a significant proportion of the effect of treatment on the clinically relevant outcome [44]. They suggested that rim area measurements may be suitable surrogate endpoints in glaucoma clinical trials [44].

The Diagnostic Innovations in Glaucoma Study (DIGS) and the African Descent and Glaucoma Evaluation Study (ADAGES) are large multi-center ongoing studies that include normal subjects, patients with glaucoma, and glaucoma suspects, who are semi-annually evaluated clinically and with several functional and optical imaging tests including HRT and SD-OCT. The 3-site collaboration includes the Hamilton Glaucoma Center at the Viterbi Family Department of Ophthalmology (RN Weinreb), University of California, San Diego (UCSD) (Data Coordinating Center) (L Zangwill), Columbia University and the New York Eye and Ear Infirmary (J Liebmann), and the Department of Ophthalmology, University of Alabama, Birmingham (C Girkin and M Fazio) [45, 46].

Results from DIGS confirmed earlier reports of the comparability of stereophotograph-based cup-to-disc ratio measurements and HRT measures in predictive models [47]. The same group also concluded that the presence of optic disc damage on stereophotographs is highly predictive of future development of functional loss [48]. They later showed that the rate of rim area loss measured using HRT is approximately 5 times faster in eyes in which POAG developed compared with eyes in which it did not. The results of this study suggest that measuring the rate of structural ONH change using cSLO-based parameters can provide important information for the clinical management of ocular hypertensive patients [49].

The ADAGES group characterized the rate and pattern of age-related and glaucomatous neuroretinal rim area changes in subjects of African and European descent, using HRT parameters. They showed that compared with healthy eyes, the mean rate of global rim area loss was 3.7 times faster and the mean rate of global percentage rim area loss was 5.4 times faster in progressing glaucoma eyes [50].

#### **5.2.4 Summary**

Heidelberg Retina Tomograph cSLO technology introduced the ability to objectively quantify various diagnostic parameters for the assessment and management of glaucoma. Its diagnostic parameters offer clinicians an objective and precise method to aid decisions in the diagnosis and management of the disease, while also serving as potential surrogate endpoints in clinical trials. A more recent imaging technology, spectral domain optical coherence tomography (SD-OCT), which also enables objective imaging and assessment of the optic nerve head as well as the retinal nerve fiber layer and macula, has become the most commonly used diagnostic imaging aid for glaucoma.

#### **5.3 SPECTRALIS SD-OCT**

With the introduction of SD-OCT, it has become possible to image ocular structures in three dimensions with high axial resolution, fast scan rates, and high contrast. These advances have improved visualization of small details and provided a platform for precise analytics. The benefits of SD-OCT technology, combined with eye-tracking algorithms that enable precise scan registration from session to session, allow for reliable removal of errors induced by eye movements. The resulting scans offer detailed visualization of retinal structures and provide accurate segmentation of the anatomical boundaries used in advanced analytics.

The SPECTRALIS SD-OCT (Heidelberg Engineering GmbH, Heidelberg, Germany) incorporates a real-time eye tracking system (TruTrack™) that couples cSLO and SD-OCT scanners to adjust for eye movements and to ensure that the same precise location of the retina is scanned time after time (Fig. 5.3), reducing variability across longitudinal measurements used for monitoring disease progression. This method also allows B-scans to be re-sampled in the same location to improve the signal-to-noise ratio (SNR), a technique called Automatic Real-time Tracking (ART).

The standard SPECTRALIS SD-OCT glaucoma software includes RNFL thickness measurements derived from a 12° circle scan,

**Fig. 5.3** A core technology of the SPECTRALIS is TruTrack (**a**), a dual-beam tracking system which provides very important clinical benefits such as retinal recognition, follow-up scanning, precise co-localization of

fundus images with depth-resolved information in OCTscans. This system also enables the Automatic Real-time Tracking (ART) technique for image averaging to improve image quality by reducing noise (**b**)

manually centered by the operator on the optic disc. The RNFL thickness analysis then provides sectorial and global measurements that are compared with a reference database of healthy controls. The RNFL thickness values for a scan are classified using a 6-sector analysis, and sectors can be flagged green ("Within Normal Limits"), yellow ("Borderline"), or red ("Outside Normal Limits"). RNFL parameters of the SPECTRALIS have been evaluated relative to other commercially available SD-OCT devices and shown to be comparably specific, sensitive, and repeatable [51, 52].
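
The traffic-light logic of such a sector classification can be sketched as follows; the sector names and percentile cutoffs are hypothetical stand-ins, not values from the SPECTRALIS reference database.

```python
def classify_rnfl(sector_thickness, p5, p1):
    """Sketch of traffic-light RNFL classification. A sector thinner than the
    1st percentile of a healthy reference database is flagged "Outside Normal
    Limits" (red), below the 5th percentile "Borderline" (yellow), otherwise
    "Within Normal Limits" (green). `p5` and `p1` map each sector name to its
    percentile cutoff in um; all values here are hypothetical."""
    flags = {}
    for sector, thickness in sector_thickness.items():
        if thickness < p1[sector]:
            flags[sector] = "Outside Normal Limits"
        elif thickness < p5[sector]:
            flags[sector] = "Borderline"
        else:
            flags[sector] = "Within Normal Limits"
    return flags
```

For example, with a hypothetical temporal-sector cutoff pair of 60 μm (5th percentile) and 50 μm (1st percentile), a measured thickness of 55 μm would be flagged "Borderline".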

While several SD-OCT devices are commercially available, Heidelberg Engineering has included several unique features and functionalities with the intent to account for relevant factors that may influence the resulting diagnostic parameters. The Glaucoma Module Premium Edition (GMPE) offers new scan patterns and an updated reference database that was acquired using these new features and functionalities, designed to enhance and support the clinical assessment of glaucoma by accounting for anatomic variability of each eye to improve classifications.

#### **5.3.1 Clinical Assessment of Optic Nerve Head Parameters**

The retinal ganglion cell (RGC) axons comprise the RNFL, with axons exiting the eye via the optic nerve head. The health of the optic nerve head can be assessed based on the amount of neuroretinal rim tissue present. Because the axons exiting the eye make up a significant portion of the neuroretinal rim, its loss is associated with RGC and axonal degeneration, both of which are characteristic of glaucomatous damage. The optic disc constitutes the clinically visible surface of the neural and connective tissues of the ONH. The health of the neuroretinal rim is defined using two landmarks: the optic disc margin and the optic cup margin. These two landmarks define the outer edge (clinical disc margin) and inner edge (optic cup margin) of the neuroretinal rim. The amount of rim tissue is then estimated within the apparent plane of the disc margin, either as the ratio of the size of the cup to the size of the disc (cup-to-disc ratio, CDR) [53] or as the rim area [54]. An eye that exhibits a large CDR is indicative of potential glaucomatous damage, as axonal loss results in expansion of the optic disc cup. However, both the optic disc and cup margins are defined subjectively and are difficult to delineate consistently [55]. Therefore, the resultant CDR and clinical neuroretinal rim quantification are variable. Furthermore, these concepts apply whether the examination is performed with direct ophthalmoscopy, slit-lamp biomicroscopy, optic disc photography, or a number of quantitative imaging methods.

The more recent SPECTRALIS SD-OCT with OCT2 Module technology provides a new level of high-resolution imaging of ONH anatomic features that are affected in glaucoma. Clinicians can now visualize optic nerve structures such as the anterior and posterior lamina cribrosa surfaces, Bruch's membrane-retinal pigment epithelium complex and its termination within the ONH border, tissue of Elschnig, and the scleral canal opening [55–65]. Studies show that accurate colocalization of fundus photographs to SD-OCT image data allows clinicians to identify structures that correspond to common clinical landmarks such as the optic disc margin [56, 66]. SPECTRALIS imaging and measurement of the ONH landmarks support interpretation of fundus images and can objectively assist in clinical assessment of the nerve in three dimensions.

#### **5.3.2 Bruch's Membrane Opening (BMO) in SD-OCT-Based Neuroretinal Rim Measurements**

More recent SD-OCT imaging studies have challenged concepts of the clinical disc margin and rim quantification from both anatomic [55] and geometric [66–68] perspectives. The termination of Bruch's membrane at the ONH marks the opening through which retinal ganglion cell axons exit the eye to form the choroidal and scleral portions of the neural canal. Because axons cannot pass through an intact Bruch's membrane to exit the eye, this anatomic opening, termed Bruch's membrane opening (BMO), is a true anatomical border of the neural tissue. Thus, the BMO is a stable anatomical landmark from which neuroretinal rim measurements can be made (Fig. 5.4) [69].

However, Bruch's membrane is a very thin anatomical structure, about 2–5 μm thick, and it appears as a hyper-reflective layer on SD-OCT that is approximately as thick as the RPE, around 14–16 μm [70]. High-resolution SD-OCT images with an excellent signal-to-noise ratio (SNR) are needed in order to detect the reflectance of bounding surfaces. It has been shown that the SPECTRALIS SD-OCT can consistently identify the BMO and that these images correlate to ground-truth ONH histology, as shown in Fig. 5.5 [56, 66, 71]. Finally, the stability of BMO under a variety of conditions provides another rationale for its usefulness as a landmark over time [69]. BMO also is unaltered by large changes in IOP induced by glaucoma surgery, as the two-dimensional plane that best fits BMO is axially stable with surgical reduction of IOP [72].

**Fig. 5.4** The BMO represents a stable structure through which all axons exit the eye. Because blood vessels and axons cannot pass through Bruch's membrane, it is considered an appropriate anatomical boundary of the optic disc

**Fig. 5.5** (**a**) Electron microscopy of Bruch's membrane shows the basal membrane of the choriocapillaris, collagen layers, elastic layer, and basal membrane of the RPE. Panels (**b**) and (**c**) show ONH histology of Bruch's membrane, along with its corresponding appearance on SD-OCT. Images courtesy Christian Mardin, MD

The orientation of the neuroretinal rim relative to the BMO varies around the ONH because axons can exit the eye along varying paths, ranging from parallel to the visual axis to perpendicular to it [73]. In order to correctly account for these variations, studies have demonstrated that the minimum distance from BMO to the internal limiting membrane represents the most geometrically accurate measurement of neuroretinal rim width [66–68]. This neuroretinal rim measurement has been termed Bruch's Membrane Opening—Minimum Rim Width (BMO-MRW). Studies have demonstrated the usefulness of BMO-MRW in the detection of progressive ONH change in experimental animal models of glaucoma and in human eyes [69, 74]. The BMO-MRW is diagnostically specific and sensitive for detecting glaucoma, and it enhances the clinical assessment of the optic disc [75]. The BMO-MRW parameter provides better diagnostic performance than the original gold-standard HRT analyses [76]. Ultimately, the BMO-MRW and RNFLT measurements have been shown to complement each other in the assessment and monitoring of glaucoma [77].
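The geometric definition of BMO-MRW (the minimum distance from a BMO endpoint to the internal limiting membrane) is simple to express in code. Below is a minimal sketch with hypothetical coordinates, not the SPECTRALIS implementation; a global BMO-MRW would average this over all 48 BMO endpoints.

```python
import numpy as np

def bmo_mrw(bmo_point, ilm_points):
    """Minimum rim width: shortest Euclidean distance (µm) from one
    Bruch's membrane opening (BMO) endpoint to the internal limiting
    membrane (ILM), represented here as sampled surface points."""
    d = np.linalg.norm(ilm_points - np.asarray(bmo_point), axis=1)
    return d.min()

# Toy B-scan geometry in (z, x) µm: a flat ILM sampled along the scan
# and one BMO endpoint deeper in the tissue.
ilm = np.column_stack([np.full(50, 120.0), np.linspace(0.0, 980.0, 50)])
bmo = (300.0, 400.0)
print(bmo_mrw(bmo, ilm))  # 180.0: the rim width is the shortest path
```

Unlike a rim measurement taken in the fixed plane of the clinical disc margin, this minimum-distance formulation stays geometrically meaningful however obliquely the axons exit the eye.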

The SPECTRALIS GMPE optic nerve head radial and circle (ONH-RC) scan acquires 24 radial and three concentric circle scans, with diameters of 3.5, 4.1, and 4.7 mm, centered on the BMO (Fig. 5.6). The radial scans define 48 BMO points that serve as the basis for the BMO-MRW measurements, and the three circumpapillary RNFL scans offer complementary RNFL thickness measurements that capture valuable information away from the optic nerve head. The BMO-MRW measurements of the radial scans and the RNFL measurements of the three circle scans are both adjusted for BMO area and age. The circle scans show comparable diagnostic performance, and in patients with large areas of peripapillary atrophy, the outer scans may offer reliable RNFL measurements when the conventional 3.5 mm scan is confounded by atrophy [78].

**Fig. 5.6** The ONH-RC scan pattern produces 24 line scans and 48 BMO endpoints, shown in (**a**). The three concentric circumpapillary scans have a diameter of 3.5 mm (**b**), 4.1 mm (**c**), and 4.7 mm (**d**). The three scans often offer complementary information and can confirm the presence of focal RNFL wedge defects that broaden away from the ONH (white arrows). Images courtesy Maria Pilar Bambó, MD, PhD

#### **5.3.3 Anatomic Variation: Position of the Fovea Relative to the Center of the ONH**

On average, the fovea is located 7° below the level of the center of the ONH, but the angle can vary from 6° above to 29° below [79]. Although the positions of the fovea and ONH center vary considerably between subjects, the anatomic path of RNFL bundles is governed primarily by these two structures as the bundles approach the ONH and exit the eye [80, 81]. In fundus images, the positions of the fovea and ONH also may vary slightly within the same individual from day to day because of cyclotorsion [82], but the path of RNFL bundles remains constant relative to the fovea-BMO center (FoBMOC) axis as shown in Fig. 5.7 [81].

The FoBMOC axis can vary significantly between the two eyes of one individual. A clinical example of this disparity is shown in Fig. 5.8. If these variations are not taken into account, artificially large inter-individual differences in sectoral measurements may result, reducing the diagnostic precision of the device. Errors in mapping ocular structures to the visual field also may be induced, which could contribute to the somewhat poor correlation between measures of structure and function observed in glaucoma [83, 84].

To account for these issues, the SPECTRALIS GMPE offers a proprietary feature called the "Anatomic Positioning System" (APS). Image acquisition using the APS ensures that OCT images are acquired at fixed and known retinal locations relative to certain anatomical landmarks: the center of the fovea and the center of Bruch's membrane opening. The process of defining the APS landmarks is semi-automated within data acquisition, and the operator is able to adjust and confirm the landmarks that the device detects (Fig. 5.9). All subsequent GMPE scans are aligned to the baseline landmarks and are automatically oriented according to the patient's FoBMOC axis. FoBMOC-aligned scans ensure all eyes are anatomically aligned correctly and compared with healthy control eyes regardless of anatomical differences, thereby improving accuracy of the sector analysis (Fig. 5.8).
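Once the fovea and the BMO center are localized, the FoBMO angle used to orient the scans and sector boundaries is plain trigonometry. The sketch below uses hypothetical fundus-image coordinates (x increasing temporally, y increasing downward) and a sign convention chosen to match the examples in Fig. 5.8 (negative when the fovea lies below the BMO center); it is an illustration, not the device's algorithm.

```python
import math

def fobmo_angle_deg(fovea, bmo_center):
    """Angle of the fovea-BMO-center axis relative to the horizontal,
    in degrees; negative when the fovea sits below the BMO center.
    Inputs are (x, y) image coordinates with y increasing downward."""
    dx = abs(fovea[0] - bmo_center[0])   # laterality-independent
    dy = fovea[1] - bmo_center[1]
    return -math.degrees(math.atan2(dy, dx))

def rotate_sector_boundaries(boundaries_deg, fobmo_deg):
    """Shift nominal sector boundary angles so sectors are defined
    relative to the eye's own FoBMO axis rather than the image axes."""
    return [(b + fobmo_deg) % 360 for b in boundaries_deg]

angle = fobmo_angle_deg((4000.0, 492.0), (0.0, 0.0))
print(round(angle, 1))  # close to the -7 degree population average
```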

While the APS landmarks serve as the basis for accurate baseline measures of each individual eye, the previously mentioned TruTrack eye-tracking feature accounts for changes in head position and ensures precise placement of follow-up scans, both important for maintaining accurate comparison to the reference database and when assessing progression [85].

**Fig. 5.7** Even as the angle of the fovea to the center of the ONH changes due to cyclotorsion or anatomical differences, the arcuate path of the RNFL bundles remains constant to this axis. This can be seen from the tracing of the RNFL fibers from the nerve to the fovea in (**a**). For 11 different eyes, the optic nerve position varies relative to the fovea as shown in (**b**), but alignment of the images in (**c**) illustrates the consistent path of the RNFL bundles. Figure 15 of [81], reprinted with permission

**Fig. 5.8** The range of angles for the fovea to center of ONH axis is large across a population. In these five examples, the fovea is +1.5° in (**a**), −3.3° in (**b**), −8.3° in (**c**), −11.9° in (**d**), and −16.0° in (**e**). Variability can also occur between the two eyes of a single patient, as seen in (**f**) with an angle of +0.4° OD and −11.0° OS. The three white lines through the ONH in (**f**) represent the 6-sector Garway-Heath classification regions and are shifted according to the angle

#### **5.3.4 Anatomic Variation: ONH Size and Ocular Magnification Impact RNFL Measurements**

In 1996, Schuman et al. reported that a circle diameter of 3.4 mm was the most accurate and reproducible scan size for RNFL thickness measurements [86]. Since then, most OCT instruments and studies have used circular scans with a diameter very close to 3.4 mm, independent of ONH size. However, it is now generally recognized that optic disc size shows high interindividual variability, with areas ranging between 0.8 and 6.0 mm² in normal eyes [87]. Histological studies have shown that RNFL thickness decreases with increasing distance from the optic disc margin [88]. Because of this, RNFL thickness is increased in larger optic discs when measured with a fixed scan size [89–91]. These studies indicate that using a fixed scan diameter without adjusting for ONH size introduces inconsistencies and reduces measurement accuracy. Scaling the scan diameter according to the ONH margin may provide a more accurate diagnostic RNFL thickness measurement. For this reason, the SPECTRALIS GMPE results account for the size of the ONH (defined as BMO area) when comparing each eye's RNFL thickness and BMO-MRW values with the respective reference database.

**Fig. 5.9** The operator confirms the automated detection of both the fovea (**a**) and the BMO (**b**) positioning within the acquisition window. These anatomic positioning system (APS) landmarks serve as the placement points for the ONH-RC scan

Visualization and SD-OCT imaging of the ONH are also affected by ocular magnification [92–96]. Magnification is determined by two factors: axial length and corneal power. In eyes with longer axial length, the actual diameter of a fixed-size OCT scan will be larger when it reaches the retinal plane. The cornea provides approximately two-thirds of the eye's total optical power and plays a significant role in determining the scan size on the retina. Without ocular magnification corrections, OCT scans may not be correctly scaled, leading to inconsistent measurements of RNFL thickness. The SPECTRALIS GMPE software allows the user to adjust for magnification during acquisition by entering individual corneal curvature values and by bringing the optic nerve into sharp focus on the cSLO image (Fig. 5.10).
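Heidelberg's magnification model is proprietary, but the underlying optics can be illustrated with the widely used Littmann–Bennett approximation, which scales retinal dimensions with axial length. In this sketch the instrument constant cancels by referencing a hypothetical 24.0 mm calibration eye (an assumption for the example, not a device value).

```python
def bennett_q(axial_length_mm):
    """Bennett's ocular magnification factor q = 0.01306 * (AL - 1.82);
    true retinal size t = p * q * s, with p an instrument constant and
    s the size measured in the image."""
    return 0.01306 * (axial_length_mm - 1.82)

def on_retina_diameter(nominal_mm, axial_length_mm, calib_axial_length_mm=24.0):
    """Approximate diameter a nominally fixed circle scan attains on the
    retina, relative to the eye assumed in instrument calibration
    (the instrument constant p cancels in the ratio)."""
    return nominal_mm * bennett_q(axial_length_mm) / bennett_q(calib_axial_length_mm)

# A nominal 3.5 mm circle lands larger on a longer (myopic) eye,
# sampling the RNFL farther from the disc margin, where it is thinner.
print(round(on_retina_diameter(3.5, 26.0), 2))  # ~3.82 mm
```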

#### **5.3.5 Factors that May Confound Measurements and Classifications: Age, Axial Length, and Tilted Discs**

There is a negative association between RNFL thickness and age that may explain the higher rate of glaucoma detection in older individuals [97]. The GMPE reference database accounts for this known age-related decline in RNFL thickness. The values collected for the reference database also show a statistically significant negative correlation between age and BMO-MRW, and the software's reference database comparisons take this relationship into consideration. The reference database also shows a negative correlation between BMO area and MRW (larger BMO area associated with thinner MRW) and a positive correlation between BMO area and circumpapillary RNFL thickness (larger BMO area associated with thicker cpRNFL measurements), so the reference database comparisons also are scaled by the BMO area.

**Fig. 5.10** The operator can input the corneal curvature value for each eye so that the fixed-millimeter circle scans are correctly scaled relative to the fundus cSLO image and corresponding ONH size. This figure shows the range that the circle scans cover across the minimum and maximum corneal curvature values. This eye has a measured corneal curvature (CC) of 7.7 mm. Note that with a high CC value, the circle scans are closer to the disc margin than they should be. With lower CC values, the scans are farther from the disc margin than they should be. Because the RNFL is thinner with increasing distance from the disc margin, the sectorial and global results compared with the reference database are considerably different due to changes in scaling and magnification that are caused by the disparate corneal curvature values

As axial length and spherical equivalent (SE) refractive error increase, the measured average RNFL thickness decreases [98, 99]. This anatomic relationship can decrease the diagnostic power of a reference database, and most devices have a defined range of SE values of −6 to +6 diopters. Even within the included diopter range, the RNFL thickness profile plotted in the circular TSNIT (temporal-superior-inferior-nasal-temporal) profile can be shifted (Fig. 5.11). Considering these anatomical variations, when the SD-OCT reference database metrics do not agree with visual field tests and/or the clinical examination, it may be helpful to review the SD-OCT B-Scans in order to evaluate the overall appearance of the RNFL to help confirm a positive or negative glaucoma diagnosis. The high-quality SPECTRALIS B-Scans offer the detail to evaluate these structures. It may also be useful to consider macular GCL thickness and BMO-MRW measurements in such cases [100, 101].

The tilted disc phenomenon is another confounding anatomical feature that can affect glaucoma diagnosis. Tilted discs can be congenital or, more commonly, can occur in cases of myopia. In these eyes, the axons exit the eye via the ONH at angles that make assessment of the BMO center and BMO-MRW challenging. In addition, such anatomic anomalies result in arcuate RNFL patterns that are not accounted for in SD-OCT reference databases, making RNFL thickness comparison inconclusive, especially in the temporal region [100, 102]. Also, the distribution of RNFL in eyes with tilted discs is shifted according to the direction of the tilt [103]. The visual field defects in such eyes also may mimic glaucomatous defects, further confounding the presentation [104]. These characteristics should be considered when applying SD-OCT to the interpretation of RNFL measurements in eyes with tilted discs (Fig. 5.12). A brief inspection of the SPECTRALIS cSLO image and the BMO-MRW radial B-Scans can clearly show BMO-MRW asymmetry and confirm the presence of a tilted disc.

#### **5.3.6 Posterior Pole: Macular and Asymmetry Analyses**

The density of RGCs is highest in the macula, and the ganglion cell layer (GCL) measured by SD-OCT is thickest surrounding the fovea. Loss of these cell bodies has been shown to be indicative of early glaucomatous damage [105]. Studies have also shown that glaucomatous damage results in characteristic patterns of ganglion cell degeneration in the macula. These patterns of loss present as arcuate patterns that correspond to the arcuate RNFL patterns of loss, confirming that the ganglion cell somas and their respective axons are degenerating. Such observations have led to the concept of "macular vulnerability zones" in the temporal inferior and temporal superior sectors [81]. Therefore, when reviewing OCT diagnostic results, it is important to assess anatomically corresponding ONH, RNFL and macula data in order to detect patterns common to glaucomatous damage.

The superior and inferior GCL are symmetric across the fovea in healthy eyes, and assessment of vertical GCL asymmetry across the fovea may be a sensitive method for detection of early glaucomatous damage [106]. Early damage also may be present asymmetrically between eyes [107, 108]. Macular analysis may be especially important for patients classified as "Glaucoma Suspects" by ophthalmic exam, and a study found that glaucoma suspects with macular thinning were more likely to subsequently present with visual field loss [109].

The GMPE posterior pole horizontal (PPoleH) scan offers macular thickness maps of total retinal thickness as well as the individual GCL, inner plexiform layer (IPL), and macular RNFL thicknesses. The Posterior Pole Asymmetry Analysis (PPAA) is derived from the PPoleH total retinal thickness and offers a quantitative and illustrative method to assess the asymmetric loss of macular tissues between the superior and inferior macula as well as between eyes (Fig. 5.13). This feature allows clinicians to confirm that the patterns of loss observed on the PPAA and the total retinal thickness maps are in agreement with ganglion cell degeneration patterns that are characteristic of glaucoma (Fig. 5.14). Regardless of which layers are assessed, macular parameters have been shown to add value as diagnostic tools in the detection of glaucomatous damage [110].

**Fig. 5.11** In cases of increased axial length, the RNFL thickness values in the superior and inferior regions may be shifted towards the nasal or temporal sectors. In (**a**), the normal TSNIT profile has peaks that correspond to the age- and BMO-adjusted reference database mean (green shading, solid green line). In a case of myopia (**b**), the increased axial length causes a shift of the superior and inferior RNFL thickness peaks. This causes the 6-sector Garway-Heath analysis to flag the sectors "Outside Normal Limits," but visual inspection of the SD-OCT B-Scans shows a normal, healthy RNFL. Images courtesy Daniel Fuller, OD, Michael Gerstner, OD, and Christopher Lievens, OD, MS
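The core of a hemisphere asymmetry map is a mirrored subtraction across the horizontal meridian. Below is a minimal sketch of the idea (not Heidelberg's implementation) on a synthetic 8×8 total-retinal-thickness grid with focal inferior thinning.

```python
import numpy as np

def hemisphere_asymmetry(grid):
    """For each cell of an 8x8 thickness grid (µm), the difference to its
    mirror cell across the horizontal meridian (row 0 = most superior).
    Positive values: the cell is thicker than its mirrored counterpart."""
    return grid - grid[::-1, :]

rng = np.random.default_rng(0)
grid = np.full((8, 8), 290.0) + rng.normal(0.0, 2.0, (8, 8))
grid[5:8, 0:3] -= 40.0               # focal inferior thinning

asym = hemisphere_asymmetry(grid)
# The thinned inferior cells and their superior mirrors flag in pairs:
print(bool((asym[0:3, 0:3] > 20).all()), bool((asym[5:8, 0:3] < -20).all()))
```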

**Fig. 5.12** A tilted disc can be seen on the fundus photograph (**a**), and the corresponding BMO endpoints can be identified on the radial scans through the ONH (**b**). Similar to cases of increased axial length, there can be shifts in the TSNIT profile of BMO-MRW that cause false-positive comparisons to the reference database. Visual inspection of the SD-OCT B-Scans reveals a healthy neuroretinal rim. Images courtesy Mohammad Rafieetary, OD

#### **5.3.7 Detection of Glaucomatous Progression with OCT**

Considering that most forms of glaucoma are slowly progressing optic neuropathies characterized by the loss of RGCs and their axons, the detection of glaucomatous progression is a critical aspect of disease management. The identification of structural glaucomatous changes, such as progressive thinning of the RNFL and narrowing of the neuroretinal rim, assists clinicians in confirming the initial diagnosis. More importantly, detecting these changes over time provides clinicians with the information they need to make decisions about further treatment to prevent additional glaucomatous visual impairment.

In order to accurately assess progression, test measurements need to have adequate reproducibility [111, 112]. If test measurements have poor reproducibility or are contaminated by noise, detection of true structural loss is not possible. Pierro et al. evaluated retinal nerve fiber layer thickness (RNFLT) measurements using seven different OCTs (spectral- and time-domain) to assess inter- and intra-operator reproducibility of RNFLT [113]. They determined that the SPECTRALIS SD-OCT showed the best reproducibility among the tested devices. The SPECTRALIS eye tracking and ART image acquisition help provide high-quality OCT B-Scans, which in turn allow for reproducible segmentation of the RNFL. In a separate study of clinically relevant reproducibility, Wessel et al. used the SPECTRALIS SD-OCT to measure circumpapillary RNFLT (cpRNFLT) in healthy controls and glaucoma patients over 3 years [114]. Glaucoma subjects were classified as progressing or non-progressing by masked grading of stereoscopic optic disc photographs, and the study showed that the SPECTRALIS measured a cpRNFLT loss of 0.6 μm/year in healthy eyes, 1.2 μm/year in non-progressing eyes, and 2.1 μm/year in progressing subjects. Miki et al. showed that the rate of global RNFL loss was more than twice as fast in eyes that developed visual field defects compared with eyes that did not [16]. These findings indicate that SD-OCT imaging can be used to detect glaucomatous changes beyond losses from normal aging and beyond the possible noise in repeated measurements.

**Fig. 5.13** The 61-line posterior pole horizontal scan produces a total retinal thickness color map (lower left). The 8 × 8 grid of thickness values serves as the basis for the Posterior Pole Asymmetry Analysis (PPAA) in the lower right panel. The difference in thickness for each corresponding square across the horizontal meridian is shown, with darker values indicating a larger asymmetric difference in thickness. This analysis can be used to visualize and quantify areas of retinal thickness loss that are characteristic of glaucoma. Images courtesy Shinji Ohkubo, MD, PhD, and Kazuhisa Sugiyama, MD, PhD
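At its simplest, a trend-based progression analysis fits a line to thickness over time and compares the slope against the expected age-related decline. A sketch with hypothetical visit data follows; the device's own progression statistics also model measurement variability.

```python
import numpy as np

def rnfl_loss_rate(years, thickness_um):
    """Slope (µm/year) of a least-squares line through serial global
    cpRNFL thickness measurements; slopes clearly steeper than the
    roughly -0.6 µm/year of normal aging suggest progression."""
    slope, _intercept = np.polyfit(years, thickness_um, 1)
    return slope

# Hypothetical semi-annual visits over three years:
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
thk = np.array([94.0, 93.0, 92.9, 91.8, 90.8, 89.7, 87.7])
print(round(rnfl_loss_rate(t, thk), 2))  # about -1.97 µm/year
```

A slope near −2 µm/year sits in the range Wessel et al. reported for progressing eyes, well beyond the aging-related loss measured in healthy controls.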

The ability of a device to measure change over long periods of time also depends on its dynamic range (i.e., the maximum and minimum RNFL thickness measurements that can be reliably made). The lower end of the dynamic range is the "floor" of the measurement, the minimum layer thickness that is reliably measurable. It is important for an SD-OCT device to have a large dynamic range and a low "floor" value in order to be able to monitor disease progression over time, particularly in advanced glaucoma. The global cpRNFLT dynamic range of the SPECTRALIS has been shown to be larger than that of comparable devices, while also offering a lower floor [115]. Nevertheless, monitoring advanced glaucoma using standard structural and functional testing is extremely difficult for the treating clinician because both the standard structural and functional tests that usually guide treatment decisions are of diminished value. Standard structural measures have a limited dynamic range, and visual field (VF) test points are more variable in advanced disease [81, 115–120]. However, Belghith et al. have shown that even in very advanced glaucoma, structural loss can be detected in some eyes using standard global structural measures, with macular GCIPL identifying the highest proportion of eyes with detectable change, followed by MRW and cpRNFL [121]. In a subsequent study, Bowd et al. concluded that in advanced glaucoma, more macular tissue remains above the measurement floor compared with other measurements, suggesting that macular thickness is the better candidate for detecting progression in such eyes and that progression with SD-OCT measurements is observable in advanced disease [122–125].

**Fig. 5.14** Segmentation of the PPoleH scans between the RNFL and the ganglion cell layer (GCL) provides a GCL thickness color map. In this eye, the same as in Fig. 5.13, there is clear GCL loss in the inferior temporal region

#### **5.3.8 Summary**

Because every eye is unique, it is also important for imaging technologies to incorporate analysis tools that allow for the detection of change in a specific eye over time. The SPECTRALIS SD-OCT provides follow-up scans that are coregistered to baseline imaging, which improves repeatability and makes measurement more precise. The combination of high reproducibility, large dynamic range, a low measurement floor and multiple diagnostic parameters such as the BMO-MRW, cpRNFL and macular thickness measurements allows the device to offer a precise and clinically sensitive progression analysis for all stages of the disease.

Several studies have investigated the diagnostic performance of various SD-OCT parameters in a standard ophthalmic environment, and the literature suggests that circumpapillary RNFL thickness, ONH, and macular parameters are specific and sensitive for the detection of glaucoma [107, 126–128]. The SPECTRALIS GMPE software offers multiple measurements of ocular structures that, taken together, can offer confirmatory information and increase confidence in the diagnosis. It has been shown that multiple parameters, when used effectively and with caution, are better than any single parameter for diagnosis and management of glaucoma [129]. More importantly, there is clear utility in looking beyond reports at the individual OCT scans, as these can help clarify and reconcile outlying and aberrant outputs.

#### **5.4 Summary and Outlook**

The development and clinical implementation of cSLO technology (Heidelberg Retina Tomograph) was one of the first imaging technologies in ophthalmology that introduced the ability to objectively quantify various diagnostic parameters for the assessment and management of glaucoma. The technology's diagnostic parameters offered clinicians an objective and precise method to aid their decision in the diagnosis and management of the disease. While the clinical implementation of the HRT served to improve patient care, several clinical research studies employed SLO technology to derive potential surrogate endpoints in clinical trials.

SD-OCT is a more recent technology that also provides objective and quantitative methods to assess the optic nerve head as well as the retinal nerve fiber layer and macula. A current version of this technology, the SPECTRALIS SD-OCT with OCT2 Module, provides a new level of high-resolution imaging of ONH anatomic features that are affected in glaucoma. This device provides clinicians with the ability to visualize optic nerve structures such as the anterior and, in some eyes, posterior lamina cribrosa surfaces, the Bruch's membrane-retinal pigment epithelium complex and its termination within the ONH border, the tissue of Elschnig, the choroid, and the scleral canal opening.

The ability to visualize the relevant ocular structures in three dimensions in-vivo for the diagnosis and management of glaucoma and to carefully implement multiple objective parameters enables clinicians to make more confident diagnostic decisions. Structural assessment using the imaging technologies discussed in this chapter provides reproducible quantitative measurements of posterior segment ocular structures relevant to the disease.

A newer development in OCT technology, OCT Angiography (OCTA), has sparked interest in evaluating vascular alterations in the retina and ONH for diagnosis, staging, and monitoring in glaucoma. OCTA is an extension of OCT which allows non-invasive visualization of the retinal vasculature by detecting signal changes induced within perfused blood vessels without the use of exogenous dye. In principle, OCTA compares sequential B-Scans acquired at the same location to detect change. As stationary structures would appear static in sequential B-Scans, changes detected by OCTA are largely attributed to erythrocyte movement in the perfused vasculatures.
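A minimal sketch of this principle follows, using a simplified amplitude-decorrelation contrast (one of several published OCTA schemes, not necessarily the SPECTRALIS algorithm): static pixels yield values near zero, while pixels whose signal fluctuates between repeats stand out.

```python
import numpy as np

def decorrelation(bscan_a, bscan_b, eps=1e-9):
    """Per-pixel decorrelation D = 1 - 2ab / (a^2 + b^2) between two
    repeated B-scan amplitude images of the same location: ~0 for static
    tissue, rising toward 1 where the signal changes between repeats,
    as over perfused vessels."""
    return 1.0 - (2.0 * bscan_a * bscan_b) / (bscan_a**2 + bscan_b**2 + eps)

repeat1 = np.full((4, 6), 100.0)       # static tissue: identical signal
repeat2 = repeat1.copy()
repeat2[2, 3] = 160.0                  # moving erythrocytes alter the signal

d = decorrelation(repeat1, repeat2)
print(bool(d[0, 0] < 1e-6), round(float(d[2, 3]), 3))
```

In practice several repeats are averaged and residual bulk eye motion must be compensated first, since any global shift between B-scans would otherwise masquerade as flow.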

Because OCTA offers the non-invasive assessment of microvasculature in the peripapillary retina and macula, it is being investigated for its potential to assess ocular hemodynamics in various diseases [130]. Several OCTA studies have shown reduced microcirculation in the peripapillary retina and the superficial macula of open-angle glaucoma eyes, with a moderate relationship between microvasculature and function [131–133].

An early OCTA study suggested that the OCTA vessel density parameter may identify glaucomatous damage before focal visual field defects are detectable [132]. Another study showed that macular vessel density results correlate with central 10-2 visual field defects [133]. The same group also concluded that macular and peripapillary vascular density measurements detected changes in retinal microvasculature before visual field damage in the unaffected eyes of glaucoma patients presenting with unilateral glaucoma [134]. Moghimi et al. showed that lower baseline OCTA parameters were associated with a faster rate of RNFL progression in mild to moderate glaucoma over a mean follow-up of 27 months, suggesting that decreased vessel density may be a risk indicator for progression [135].

It remains controversial whether OCTA measurements have a higher diagnostic performance for glaucoma detection compared to conventional OCT measurements such as RNFL thickness, neuroretinal rim width, and macular ganglion cell and inner plexiform layer thickness. Chen and colleagues demonstrated that OCTA-measured vessel density and circumpapillary RNFL thickness measurements had comparable diagnostic performance for the detection of glaucoma suspect and glaucoma eyes [136]. Another recent study compared the diagnostic performance for glaucoma detection between OCTA vessel density measurements (defined using a non-commercially available method) and structural OCT RNFL thickness measurements. Results indicated that vessel area density measurements had a significantly smaller area under the receiver operating characteristic curve for classifying healthy versus glaucoma eyes than RNFL thickness measurements [137], in disagreement with several similar studies. However, classification performance for healthy versus glaucoma suspect eyes was similar for vessel area density and RNFL thickness measurements. There is also evidence that deep-layer microvasculature dropout of the parapapillary choriocapillaris or microvasculature within the sclera occurs in eyes with glaucoma, is associated with more severe visual field damage, and occurs more frequently in eyes with disc hemorrhages and RNFL thinning [63, 138–140].

While findings from the literature comparing OCTA and structural OCT measurements for the detection of glaucoma and evaluation of structure-function associations are divergent, the potential ability to elucidate the temporal sequence of vascular changes and optic nerve damage in glaucoma may pave the way for better understanding and management of the disease.

During the past three decades, advancements in ophthalmic diagnostic imaging technologies have made it possible to detect glaucomatous neuropathy at early stages of disease. These advanced imaging technologies provide large amounts of reproducible data, allowing clinicians to discriminate between normal and glaucomatous optic nerves in a more systematic way. From the earlier stages of cSLO imaging to the more recent implementations of OCT technology, several decades of technical and clinical work have been united and integrated in high-technology solutions that give physicians the ability to look deeply into the pathological process of glaucoma and to develop better diagnostic and therapeutic strategies. The development of ophthalmic diagnostic imaging technology and the continued efforts to enhance it continue to benefit patients who suffer from glaucoma.

Gerhard Zinser, PhD was an inspiration to us all. He was a visionary in ophthalmic imaging and an innovative scientist who pursued collaborations and supported new ideas from investigators and clinicians from all over the world. He recognized the importance of designing instruments that produce the best quality scans possible, while also making sure users understood the strengths and limitations of the technology. I have strong memories of sitting for hours with Gerhard and our reading center team in San Diego reviewing images, developing and refining quality control criteria and discussing strategies for analysis to ensure that the highest quality information would be available for the Diagnostic Innovations in Glaucoma Study (DIGS), the African Descent and Glaucoma Evaluation Study (ADAGES) and the Ocular Hypertension Treatment Study (OHTS). Our research was accelerated through his support of promising research in our laboratory, and numerous discussions about how to translate the results into clinical care. His passing is a great loss to the vision science community. (Linda Zangwill, PhD)

**Acknowledgement** Supported in part by National Institutes of Health/National Eye Institute grants R01 EY029058 (RNW), R01 EY027510 (LZ), R01 EY011008, R01 EY014267, Core Grant P30EY022589, and by an unrestricted grant from Research to Prevent Blindness (New York, NY).



# **OCT Angiography (OCTA) in Retinal Diagnostics**

Roland Rocholz, Federico Corvi, Julian Weichsel, Stefan Schmidt, and Giovanni Staurenghi

#### **6.1 Introduction**

Optical coherence tomography angiography (OCTA) is a non-invasive imaging technique which can be used to provide three-dimensional visualization of the perfused vasculature of the retina and choroid [1, 2]. In contrast to standard structural optical coherence tomography (OCT, see Chap. 5), OCTA analyzes not only the intensity of the reflected light but also the temporal changes of the OCT signal. Based on repeated OCT section images (B-Scans) from the same location of the retina, it is possible to separate the temporal signal changes caused by moving particles (such as erythrocytes flowing through vessels) from other sources of signal change (e.g. eye motion or noise in the OCT signal). Thereby, image contrast between perfused vessels and the static surrounding tissue can be created, as illustrated in Fig. 6.1.

Using dense volume scans, it is possible to obtain OCTA images that are similar to fluorescence angiography images, which are the clinical gold standard. In contrast to fluorescence angiography, OCTA has the advantage of not requiring any dye injection. Moreover, while fluorescence angiography provides only two-dimensional images of the fundus, OCTA enables the visualization of structure and blood flow within the vitreous, the retina, and the choroid, separately (see Sects. 6.2.2 and 6.2.3). Using appropriately adjusted segmentation boundaries, it is also possible to examine the distinct capillary networks of the retina (with vessel diameters as small as approx. 8 μm) [3]. The definition of the separating boundaries has evolved since the introduction of OCTA into clinical practice and is described in Sect. 6.2.4.

Various OCTA algorithms have been proposed and utilized in research and in clinical devices for OCTA image construction (see Sect. 6.2.1). Therefore, OCTA images from different devices vary in appearance [4, 5], which may result in different clinical diagnostic interpretations. While each unique OCTA algorithm is subject to slightly different limitations that are attributed to its overall approach, there are certain confounding factors and/or limitations that impact all algorithms and are innate characteristics of this imaging modality [5]. These factors include, but are not limited to, reduced light penetration in deeper layers and image artifacts projected from more superficial layers to deeper ones. Artifacts can originate from image acquisition, eye motion, image processing, and display strategies [5]. Section 6.3 describes some of the major artifacts related to OCTA as well as state-of-the-art countermeasures.

© The Author(s) 2019 J. F. Bille (ed.), *High Resolution Imaging in Microscopy and Ophthalmology*, https://doi.org/10.1007/978-3-030-16638-0\_6

R. Rocholz (\*) · J. Weichsel · S. Schmidt Heidelberg Engineering GmbH, Heidelberg, Germany

F. Corvi · G. Staurenghi Eye Clinic, Department of Biomedical and Clinical Science "Luigi Sacco", Sacco Hospital, University of Milan, Milan, Italy

**Fig. 6.1** Example of how the OCT signal intensity changes over time, after bulk motion correction. (**a**, **b**) Structural OCT images were acquired with a time difference of 8 ms. The location of a larger blood vessel (yellow circle) and of static tissue (blue circle) is indicated in both images. (**c**) Upon magnification of these areas and calculation of the absolute differences, larger signal changes can be seen within the blood vessel compared to the static tissue

Section 6.2.5 briefly introduces OCTA metrics, which are intended for quantitative evaluation of OCTA data. Such numerical aggregates of the image data enable an objective analysis of disease progression and statistical conclusions in larger studies of diseases.

With fluorescence angiography, namely Fluorescein Angiography (FA) and Indocyanine Green Angiography (ICGA), dynamic phenomena such as dye leakage, pooling, and staining can additionally be observed. These phenomena cannot be observed with OCTA because no motion of blood cells is involved. While these phenomena are also used in clinical diagnosis [6], retinal pathology can likewise be obscured by leakage or hemorrhage. In contrast, OCTA can generate high-contrast, well-defined images of the microvasculature below areas of leakage or hemorrhage [7]. Therefore, dye-based angiography and OCTA provide complementary information. To illustrate the similarities and differences of OCTA with respect to the gold standard of dye-based angiography, Sect. 6.4 provides side-by-side comparisons for clinical cases of diabetic retinopathy, retinal vein occlusion, macular telangiectasia, and age-related macular degeneration.

#### **6.2 Technical Foundation for Clinical OCTA Imaging**

OCT systems typically produce section images as shown in Fig. 6.1. Such images are commonly referred to as B-Scans. As can be seen in Fig. 6.1a, the B-Scans show a grainy pattern, also known as speckle pattern. These speckles are inherent to the interferometric OCT measurement. If two B-Scans are taken from the very same location of the retina (cf. Fig. 6.1a, b), the speckle pattern at locations of static tissue basically stays the same. In contrast, at locations of perfused blood vessels, the speckle pattern changes over time. The basic principle of OCTA is therefore to analyze the temporal variation of the OCT signal in order to derive an image of the perfused retinal vasculature.

To allow for the creation of images similar to the fundus images from standard angiography, volume scans are performed. This means that multiple adjacent B-Scans are acquired to cover extended regions of the retina. Eventually, these B-Scans are combined to form a three-dimensional sample of the retinal structure and blood flow.

#### **6.2.1 OCTA Signal Processing and Image Construction**

In OCTA imaging, several OCT B-Scans of the same retinal cross section are acquired repeatedly in short succession. Within this retinal cross section, at locations of static tissue, the microscopic configuration of illuminated scattering particles in the beam focus is well preserved over sequential acquisitions and consequently yields a consistent OCT signal over time. Contrarily, at locations where directed motion is present in the sample, as in retinal blood vessels, the scattering particles are continuously replaced by other particles in subsequent acquisitions. This continuous exchange of microscopic particles modulates the OCT signal and introduces an additional source of variability to the repeated measurements. Overall, the smaller the remaining portion of conserved scattering particles in the beam waist, the higher the variability in the OCT signal of sequential acquisitions. Maximum OCT signal variability is observed when the scattering particles are completely replaced by others in subsequent measurements. Clinical OCTA imaging on current commercially available OCT device hardware typically operates in this regime, as the physiological blood flow speeds in most of the perfused retinal vasculature [8] significantly exceed the velocity limit (typically only a few mm/s) determined by the product of the OCT beam waist diameter and the scan repetition rate. Hence, it is unlikely to observe the same red blood cell configuration in two successive B-Scans. This, however, would be a necessary requirement for deriving quantitative blood flow velocity measurements from the OCT signal variation in repeated scans (cf. Chap. 7). Accepting this current technical limitation of clinical OCT devices, OCTA algorithms for signal construction therefore focus on reliably differentiating locations of significant blood flow from static tissue, rather than on measuring blood flow velocity quantitatively. In this context, OCTA image construction is a quasi-binary classification problem with the goal of optimally distinguishing significant flow from static tissue at each location of the retina.

Different algorithmic strategies have been suggested for addressing this classification task. While some algorithms use exclusively either the amplitude or the phase of the complex OCT signal, others combine information from both. As the microscopic pattern of illuminated particles in the sample, and its detailed configurational change due to motion, is practically inaccessible to measurement, its influence on the resulting OCT signal is typically treated probabilistically, as a stochastic contribution to the overall measurement. OCTA algorithms thus quantify, in one way or another, the amount of variability in the random realizations of the OCT signal measured over repeated acquisitions. In practice, for instance, the temporal correlation [3], the overall variance [4], or other, more involved statistical parameters [2] with an expected relation to the OCT signal variability are assessed in different approaches. These statistical parameters, potentially after additional post-processing and contrast enhancement, are subsequently taken as the resulting OCTA signal in arbitrary units. As an alternative to these statistical-parameter-based OCTA signal construction methods, probabilistic models for the random OCT signal at sample locations with and without directed flow can be derived from theory and experiment [9–11]. Based on these models and the repeated OCT signal observations, a probability of being static (versus in flow) can be assigned to each measured location. This holds the advantage of yielding easily interpretable probability values, and no further contrast enhancement of the resulting OCTA signal is needed.
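To make the variance- and correlation-based constructions concrete, the following minimal numpy sketch computes a toy OCTA contrast from repeated, motion-corrected B-Scans. The function name `octa_signal`, the normalizations, and the synthetic data are illustrative assumptions, not the algorithm of any particular device; commercial implementations use more involved statistical models [2, 9–11].

```python
import numpy as np

def octa_signal(bscans, method="decorrelation", eps=1e-12):
    """Toy OCTA contrast from N repeated, motion-corrected B-Scans.

    bscans: array of shape (N, depth, width) with OCT amplitudes.
    Returns a (depth, width) map; larger values indicate stronger
    temporal variability, i.e. likely blood flow.
    """
    bscans = np.asarray(bscans, dtype=float)
    if method == "variance":
        # Temporal variance of the amplitude, normalized by the mean
        # intensity so that bright static tissue does not dominate.
        return bscans.var(axis=0) / (bscans.mean(axis=0) ** 2 + eps)
    if method == "decorrelation":
        # Mean pairwise decorrelation of successive repeats: 0 for an
        # unchanged speckle pattern, larger for fluctuating speckle.
        a, b = bscans[:-1], bscans[1:]
        return (1.0 - 2.0 * a * b / (a**2 + b**2 + eps)).mean(axis=0)
    raise ValueError(f"unknown method: {method}")

# Synthetic example: identical speckle in static tissue, freshly drawn
# speckle in a small "vessel" region on every repeat.
rng = np.random.default_rng(0)
frames = np.abs(rng.normal(1.0, 0.05, size=(1, 64, 64))).repeat(4, axis=0)
frames[:, 20:30, 20:30] = np.abs(rng.normal(1.0, 0.5, size=(4, 10, 10)))
sig = octa_signal(frames)
# sig is near zero in static tissue and clearly positive in the vessel.
```

Both branches share the same principle described above: they reduce the repeated measurements at each voxel to a single scalar that grows with temporal signal variability.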

#### **6.2.2 OCTA Data Visualization**

The acquisition of OCTA volume scans yields three-dimensional data of the retinal structure and blood flow. To visualize and analyze such rich datasets, different image representations are used, as illustrated in Fig. 6.2. To separately visualize the vascular networks (or plexuses) in the retina (cf. Sect. 6.2.4) and pathological alterations in normally avascular tissue, the review of OCTA data is typically based on so-called *en face* images that are generated from within slabs of the acquired volume. In this context, the term slab refers to a section of finite axial extent in the volumetric data, delimited by an anterior and a posterior boundary surface. These slab boundaries are usually determined by layer segmentations of the structural OCT data (see Sect. 6.2.4). The OCTA signal between the two boundaries is accumulated in the axial direction using different projection methods (see Sect. 6.2.3) and is displayed as a two-dimensional image. The resulting images give the impression of looking onto the retina and are therefore referred to as *en face* images or transverse section images. An example is given in Fig. 6.2a, showing the *en face* image of the structural OCT, and Fig. 6.2b, showing the respective *en face* image of the OCTA signal.

**Fig. 6.2** Visualization of OCT angiography data is mainly based on *en face* images and section images. (**a**) *En face* image of the structural OCT data within the superficial vascular plexus. In the background, an infrared cSLO fundus image is shown. (**b**) *En face* image of the corresponding OCTA data. (**c1**–**c3**) OCT/OCTA fusion images of a section along the fast scanning axis (B-Scan direction, green). (**c1**) Section image shows structural OCT in the background and the OCTA data as a yellow overlay. (**c2**) Same as (**c1**), but the structural OCT data is faded out. (**c3**) Same as (**c1**), but the OCTA data is faded out. (**d**) OCT/OCTA fusion image where the section is along the slow scanning axis (orthogonal to the B-Scan direction, blue)

In addition to *en face* images, section images are used for review of the spatial relationship of retinal structure and blood flow. The section images may have the same orientation as the originally acquired B-Scans (Fig. 6.2c) but may also be arbitrarily oriented within the volume; for instance Fig. 6.2d shows a section orthogonal to the original B-Scans. To provide a direct visual correlation of structural and flow information, structural OCT section images and the corresponding OCTA blood flow information at the same location can be superimposed; see Fig. 6.2c1–c3, d.

#### **6.2.3 Projection Methods**

The generation of two-dimensional *en face* images from the three-dimensional data, *OCTA*(*x*, *y*,*z*), employs a projection along the *z*-direction (i.e. axial direction). Common projection methods are the mean projection, μ(*x*, *y*), and the sum projection, *s*(*x*, *y*), which can be discretized at voxel-level and written as

$$\mu(x,y) = \frac{1}{z_l(x,y) - z_u(x,y)} \sum_{z=z_u(x,y)+1}^{z_l(x,y)} OCTA(x,y,z)$$

$$s(x,y) = \sum_{z=z_u(x,y)+1}^{z_l(x,y)} OCTA(x,y,z)$$

where *zl*(*x*, *y*) is the posterior slab boundary surface and *zu*(*x*, *y*) is the anterior slab boundary surface, so that the slab comprises all data with *z* ∈ (*zu*, *zl*] ⊂ *ℤ*. Note that the only difference between these two projection methods is the normalization factor: the mean projection is normalized by the local slab thickness, *d*(*x*, *y*) ≡ *zl*(*x*, *y*) − *zu*(*x*, *y*), while the sum projection is not normalized. This has important implications for the visual interpretation of the resulting *en face* images and also for the use of these images in OCTA analytics. To appreciate the differences, assume that the OCTA algorithm achieves a perfect result without any artifacts or noise, i.e. *OCTA*(*x*, *y*, *z*) is 1 for any voxel corresponding to a perfused retinal location and 0 for any voxel corresponding to a non-perfused location. In this idealized setting, μ(*x*, *y*) can be interpreted as a density or fill factor of perfused vessels in the slab, i.e. the ratio of perfused voxels to all voxels in the depth range from *zu*(*x*, *y*) to *zl*(*x*, *y*). The sum projection, *s*(*x*, *y*), corresponds to the total lumen of perfused vessels in the given depth range. Both measures can be meaningful, depending on how the images are read or further processed. The density of perfused vessels in a given slab may give direct insight into the relative oxygenation of this slab. However, due to the laterally varying slab thickness, *d*(*x*, *y*), the contribution of small capillaries to the mean projection also varies laterally within the same slab. This is illustrated in Fig. 6.3: the contribution to the mean projection is greater if a capillary is located where the slab is thin (Fig. 6.3b) than for an equally sized capillary at a location of greater slab thickness (Fig. 6.3c).
Considering that the slab thickness may also vary over time (due to swelling of the retina or treatment of fluid accumulations), the contribution of capillaries to the mean projection also changes over time, even if the capillaries' perfusion is unchanged. In contrast, the sum projection always gives equal weight to each voxel, independent of the local slab thickness. This means that all capillaries of a given diameter contribute equally to the sum projection. On the other hand, using the sum projection for slabs with laterally varying thickness may give the false impression of non-perfusion at locations where the slab is thin and comprises only single capillaries, compared to locations where the slab is thick enough to embrace more than one layer of capillaries. To avoid this false impression, it is important to realize that the sum projection is related to the perfused vessel lumen at each lateral location, which is naturally confined by the local slab thickness.
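The difference between the two projections can be demonstrated with a small numpy sketch. The helper `project_slab` and its toy volume are hypothetical, chosen only to mirror the definitions of μ(*x*, *y*) and *s*(*x*, *y*) above:

```python
import numpy as np

def project_slab(octa, z_u, z_l, mode="mean"):
    """Project an OCTA volume onto an en face image within a slab.

    octa: volume of shape (Z, Y, X); z_u, z_l: anterior/posterior
    boundary surfaces of shape (Y, X) with z_u < z_l, so the slab
    comprises voxels with z in (z_u, z_l].
    """
    Z = octa.shape[0]
    z = np.arange(Z)[:, None, None]
    in_slab = (z > z_u[None]) & (z <= z_l[None])
    s = np.where(in_slab, octa, 0.0).sum(axis=0)   # sum projection
    if mode == "sum":
        return s
    return s / (z_l - z_u)                          # mean projection

# A single one-voxel "capillary" in every A-scan: the sum projection is
# constant, while the mean projection depends on the local slab thickness.
vol = np.zeros((10, 1, 2))
vol[5, 0, :] = 1.0                       # one perfused voxel per A-scan
z_u = np.array([[2, 4]])                 # anterior boundary per (y, x)
z_l = np.array([[8, 6]])                 # posterior boundary per (y, x)
s = project_slab(vol, z_u, z_l, "sum")   # [[1.0, 1.0]] -- thickness-independent
m = project_slab(vol, z_u, z_l, "mean")  # [[1/6, 1/2]] -- thickness-dependent
```

The identical capillary thus appears three times stronger in the mean projection where the slab is thin, which is exactly the effect discussed for Fig. 6.3.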

The previous examples were given to illustrate the differences when reading the *en face* images. For OCTA analytics based on *en face* images, there are further aspects that need to be considered. When using the mean projection as a measure of vessel density within several slabs of varying thickness, the results cannot be compared directly, because the voxels were given different weights in the different slabs. Likewise, averaging the mean projections of different slabs will not give the same result as computing the mean projection of the combined slab.

**Fig. 6.3** Comparison of OCTA sum and mean projection. (**a**) Fusion image with the SVC slab boundary segmentation (red lines). (**b**, **c**) Detailed fusion images show the physiological thinning of the superficial vascular plexus from the macula to the periphery. (**d**) Sum projection of the SVC slab. (**e**) Mean projection of the same slab. Blue and orange arrows show the locations of (**b**) and (**c**), respectively

#### **6.2.4 Retinal Vascular Plexuses**

The retinal vascular network in the human eye is axially divided into four distinct capillary plexuses. While the vasculature within each plexus is densely linked, interconnecting vessels between these sub-networks are sparse in comparison. Each separate capillary plexus holds characteristic morphometric features that have been confirmed ex vivo in confocal microscopy [3, 12] as well as in vivo using OCTA [13, 14]. From the anterior boundary of the retina to more posterior axial locations, the four distinct plexuses are the nerve fiber layer vascular plexus (NFLVP), the superficial vascular plexus (SVP), the intermediate capillary plexus (ICP), and the deep capillary plexus (DCP); compare Fig. 6.4.

While the three deeper plexuses can be observed and axially separated even in the periphery of the retina using OCTA at sufficient axial resolution, the NFLVP is most pronounced at locations where the nerve fiber layer has a substantial thickness, as in the peripapillary region as well as parafoveally (Fig. 6.5) [14].

In order to accurately detect and manage retinal vascular conditions, it is important to precisely discern the different retinal vascular plexuses. It is also important that slabs enable a continuous representation of the retinal and choroidal vasculature so that possible vascular abnormalities are not missed during image review. Currently, conflicting definitions of the axial location of the boundaries between the retinal plexuses make the direct comparison of *en face* images from different devices difficult [16].

**Fig. 6.4** Definition of the slab boundaries. Left: Schematic figure of the layers and vessel networks in the human retina (www.he-academy.com/Retinal-Layers-Interactive). Right: Schematic figure of the slab definitions. *SVC* superficial vascular complex, *NFLVP* nerve fiber layer vascular plexus (part of SVC), *SVP* superficial vascular plexus (part of SVC), *DVC* deep vascular complex, *AC* avascular complex, *ICP* intermediate capillary plexus (part of DVC), *DCP* deep capillary plexus (part of DVC), *CC* choriocapillaris. Figure modified from https://www.heidelbergengineering.com/download.php?https://media.heidelbergengineering.com/uploads/Products-Downloads/210111-001\_SPECTRALIS\_Tutorial\_SPECTRALIS-OCTA-Principles-and-Clinical-Applications\_EN.pdf [16]

To best separate the distinct capillary plexuses within the deep vascular complex, Campbell et al. suggested defining optimal slab boundaries based on the locations of minima in the axial flow density profiles [13]. They introduced boundary definitions relative to the thickness of the segmented retinal layer. As an alternative to this approach, a subsequent study using full-spectrum OCTA at higher axial resolution found that it is also possible to define these interfaces between the three deeper plexuses at constant absolute offsets to the retinal IPL-INL interface (Fig. 6.5) [14]. This conveniently reduces the number of retinal segmentations that are necessary for creating individual *en face* visualizations of these plexuses (cf. Fig. 6.6c, d).

#### **6.2.5 Quantification of OCTA Data**

For objective assessment of disease progression and its documentation, and to enable comparisons to normative data, a concise summary of the image data in terms of numerical measurements is desirable. Such numeric parameters, describing the structure of the vasculature network as derived from the OCTA images, are also referred to as "OCTA metrics" or "OCTA analytics". Clearly, this must not be confused with OCT-based flowmetry (the subject of Chap. 7), where the physical blood flow velocity is measured quantitatively.

**Fig. 6.5** (**a**) Representative *en face* OCTA image of the superficial retinal vasculature from the optic nerve head across the fovea to the temporal periphery in a healthy eye. (**b**) Heat map of the OCTA signal in depth (horizontal axis: temporal displacement from the fovea along the fovea-BMOC axis, in degrees), averaged over 22 healthy eyes and spatially averaged within the transparent red overlay displayed in (**a**). Hot locations (white) indicate the strongest OCTA signal while cold locations (black) indicate minimal OCTA signal. Up to four axially distinct capillary plexuses can be detected in the peripapillary region as well as parafoveally. At peripheral temporal locations, three distinct plexuses are separated. The retinal layer interface between IPL and INL, when shifted anteriorly and posteriorly by constant appropriate distances, represents a conveniently defined separating boundary for visualizing the three deeper plexuses independently within *en face* projections. Image from [14] reproduced without changes according to license http://creativecommons.org/licenses/by/4.0/

Typical examples of OCTA metrics, quantifying static structural aspects of the eye's vasculature, are the various vessel density measures (vessel/perfusion density, binarized vessel density or vessel area density, skeletonized vessel density or vessel length index). These parameters are suitable for capturing the dropout of vasculature that occurs in diseases like diabetic retinopathy, retinal vein occlusion, or glaucoma [17–20]. A quantification of the flow void area is also possible, for instance for assessing the choriocapillaris structure [21].

**Fig. 6.6** Comparison of two examinations of the same eye. (**a**) *En face* image of the superficial vascular plexus (SVP) from a 30° × 15° scan acquired at a resolution of 11 μm/pixel, providing a large field of view. (**b**–**d**) *En face* images from a 10° × 10° scan with 5.7 μm/pixel resolution. (**b**) The small capillaries are better resolved in the SVP *en face* image of the high resolution scan; compare the yellow outline in (**a**), which shows the same region of the same eye. (**c**) The intermediate capillary plexus (ICP) can be clearly distinguished from the deep capillary plexus (**d**) due to the high axial resolution of ~3.9 μm/pixel (SPECTRALIS OCTA). The ICP and DCP vessel networks show clearly distinct geometric structures. In the DCP, star-like vascular intersections can be discerned which may represent a connection to the venous superficial network

Besides vessel density, other parameters that summarize morphological features of vessel branches and vessel network structure are in common use, including complexity measures such as vessel tortuosity, fractal dimension, branching point densities, and vessel diameter statistics [17–19, 22, 23]. These measures aim at capturing pathological alterations of vessel shape and spatial arrangement, as occurring for example in diabetic retinopathy, macular telangiectasia, or neovascularization in age-related macular degeneration.

These characteristics may be analyzed for the whole scan area, or alternatively as aggregates over sectors defined by specific grids (e.g. ETDRS grids), which are usually adapted to the eye anatomy and allow for the detection of spatially localized changes over time or deviations from the statistics of normal reference data.

In currently available approaches, vessel density measures are interpreted as a two-dimensional density, i.e. the fraction of area of a slab projection occupied by detected vessels (typically after applying a thresholding operation). As long as the instrument is able to axially resolve the different layers of capillaries that can be anatomically distinguished, quantitative OCTA parameters can be derived for each of them independently. To suppress the influence of larger vessels, two approaches are commonly used: Either slab projections of vessels are reduced to their centerlines (i.e. "skeletonization"), or larger vessels are simply masked out. This emphasizes thinner capillaries in the analysis.
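As an illustration of a thresholding-based density metric with large-vessel exclusion, consider the following sketch. The function `perfusion_density`, the threshold, and the synthetic image are hypothetical; clinical analytics additionally apply filtering, skeletonization, and sector grids such as ETDRS:

```python
import numpy as np

def perfusion_density(enface, threshold, large_vessel_mask=None):
    """Fraction of the analyzed area classified as perfused.

    enface: 2-D OCTA en face image (arbitrary units).
    threshold: binarization cutoff separating flow from background.
    large_vessel_mask: optional boolean mask of large vessels that are
    excluded, so the metric emphasizes the capillary bed.
    """
    perfused = enface > threshold
    if large_vessel_mask is not None:
        perfused = perfused[~large_vessel_mask]  # analyze capillaries only
    return float(perfused.mean())

# Synthetic en face image: random capillary-like background plus one
# bright large vessel along the left edge.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
img[:, :8] = 0.95
mask = np.zeros((64, 64), dtype=bool)
mask[:, :8] = True
d_all = perfusion_density(img, 0.5)        # includes the large vessel
d_cap = perfusion_density(img, 0.5, mask)  # capillary-only density
```

Masking the large vessel lowers the reported density toward the capillary level, which mirrors the motivation for skeletonization and large-vessel masking described above.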

Further quantitative parameters derived from OCTA data are area and shape measurements of specific vasculature regions, in particular the foveal avascular zone (FAZ) [19, 22] or segmented neovascular lesions [24].

There is great interest in using quantitative OCTA parameters as endpoints in clinical studies [25–27]. For this purpose, OCTA metrics need to be both repeatable and reproducible. Therefore, initial studies mostly focused on analyzing the robustness of the measurements. Errors in scan geometry, evaluation grid placement, and layer segmentation, as well as parameters such as variable signal strength, can negatively impact the measurement precision. Furthermore, data from instruments of different vendors are not directly comparable [28, 29]. This is due to differences in resolution as well as in signal generation and post-processing algorithms such as filtering, artifact suppression, and thresholding [30]. Differences in the slab definitions (cf. Sect. 6.2.4) and in the layer segmentation results of different devices also need to be carefully taken into account when comparing images or quantitative analysis results across devices.

#### **6.3 Image Artifacts and Countermeasures**

#### **6.3.1 Projection Artifacts**

The passage of the OCT light through larger superficial vessels can introduce disturbances to the probing beam in deeper retinal layers. The light fluctuates because it has passed through, and been altered by, the moving blood cells above. Current OCTA algorithms cannot distinguish these signal fluctuations from the fluctuations caused by moving blood cells in the deeper layer. This gives rise to apparent replications of superficial vessels in posterior layers. These erroneous replications are referred to as OCTA projection artifacts [5]. Projection artifacts are introduced during data acquisition and are not a consequence of the projection method for *en face* image generation. While projection artifacts were recognized as the most confounding factor early in the inception of OCTA technology [5, 7], this limitation has been addressed in current state-of-the-art devices by means of a post-processing step which is referred to as projection artifact removal [4, 31, 32]. Typically, in this step, either the undesirable replication is subtracted from *en face* projections of the deeper layer by algorithmic comparison to its source in the superficial vasculature, or the deeper axial signal is suppressed at selected locations based on heuristic rules [32]. In general, projection artifacts are more prominent in deeper layers of high signal intensity, such as the retinal pigment epithelium (RPE). It is therefore important that the projection artifact removal does not disrupt the visualization of structures such as choroidal neovascularization (CNV), which may grow through the RPE. For example, in Fig. 6.7a, various large vessels seem to be connected to a large CNV and its actual extent cannot be easily assessed. With projection artifact removal, Fig. 6.7b, the artificial replication of the vessels from superficial layers can be removed without disrupting the visualization of the pathology.
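A strongly simplified version of the subtraction-based approach can be sketched as follows. The function `remove_projection_artifact`, the global least-squares scale estimate, and the synthetic images are illustrative assumptions; deployed algorithms operate depth-resolved and with heuristic safeguards [4, 31, 32]:

```python
import numpy as np

def remove_projection_artifact(deep, superficial, eps=1e-12):
    """Simplified slab-subtraction projection artifact removal.

    Estimates how strongly the superficial en face image is replicated
    in the deeper en face image (global least-squares scale factor)
    and subtracts that fraction, clipping negative values to zero.
    """
    a = (deep * superficial).sum() / ((superficial ** 2).sum() + eps)
    return np.clip(deep - a * superficial, 0.0, None)

# Synthetic example: a vertical superficial vessel is partially
# replicated onto a deeper slab containing a horizontal vessel.
sup = np.zeros((32, 32))
sup[:, 10] = 1.0                   # superficial vessel (column 10)
true_deep = np.zeros((32, 32))
true_deep[20, :] = 0.8             # genuine deep vessel (row 20)
measured = true_deep + 0.4 * sup   # deep slab with projection artifact
cleaned = remove_projection_artifact(measured, sup)
# The artifactual column is suppressed; the deep vessel is preserved.
```

Note that at the crossing point the genuine deep signal is only approximately recovered, which hints at why careful safeguards are needed so that real structures such as a CNV are not erased together with the artifact.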

#### **6.3.2 Segmentation Artifacts**

Considering that slabs are mainly defined by automatically segmented retinal layer boundaries, careful review of the segmentation is critical for a correct interpretation of the *en face* projections. Segmentation failures are especially common in diseases where the appearance and shape of the retinal layers are altered. For instance, intraretinal fluid, large pigment epithelial detachments, choroidal neovascularization, and certain atrophies often cause segmentation errors in state-of-the-art OCTA devices. A manual correction of such errors is cumbersome if the correction is based on individual B-Scans within the dense volume. To facilitate and speed up the process of correcting compromised slab boundaries, interactive segmentation correction tools have been introduced. These tools propagate manual corrections of only a few B-Scans to the remainder of the volume by fusing automatic segmentation results with these user-provided hints [33]. These countermeasures are merely a workaround until more robust segmentation methods for the vascular networks are available.

**Fig. 6.7** A subject with a large type 1 CNV examined at 11.4 μm/pixel resolution using SPECTRALIS OCTA. (**a**) OCTA image without projection artifact removal; various large vessels seem to be connected with the neovascularization. (**b**) After removal of the projection artifacts, the actual size and extent of the neovascularization can be assessed

#### **6.3.3 Motion Artifacts**

Adequate compensation of eye motion is one of the most critical aspects of OCTA acquisition. In order to detect the temporal changes in the OCT signal related to blood flow in capillaries, eye motion needs to be detected and compensated for very accurately. Therefore, the sampling schemes and the different eye tracking implementations play a crucial role in each device's overall performance of blood flow visualization at the capillary level.

Eye movements affect the acquisition of OCTA data in two distinct ways. First, the detection of blood flow related changes in the OCT signal from repeated B-Scans requires spatial overlap of these scans (see Sect. 6.2.1). This means that the scans must overlap within the lateral width of the probing beam, which is typically on the order of 15 μm. Second, larger eye movements can lead to geometrical distortion or missing data in the OCTA volume scans. The volume scans are typically acquired within several seconds. On this time scale, it is very likely that larger eye movements occur due to saccades, changes of fixation, or changes of head pose.

Slow eye drifts within the B-Scan plane can be compensated for by image registration of the successive B-Scans, effectively removing the temporal signal change in regions of static tissue (cf. Fig. 6.1). Larger eye motion (saccades) or eye motion perpendicular to the B-Scan plane can strongly deteriorate the OCTA signal, because of insufficient spatial overlap of the B-Scan samples. This leads to missing data and geometrical distortions in the OCTA volume scans, if no countermeasures are taken. The goal of any motion artifact compensation is to ensure that OCTA data acquired from visit to visit is devoid of errors that may reduce the precision of quantitative change analyses.
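The registration of successive B-Scans mentioned above can be sketched as a simple integer-pixel alignment via the peak of the 2D cross-correlation, computed in the Fourier domain. This is only an illustrative sketch under simplifying assumptions (the function name and FFT-based estimator are our own choices, not a vendor implementation); clinical devices additionally use subpixel estimation and more robust strategies.

```python
import numpy as np

def register_bscans(reference, moving):
    """Estimate the (z, x) shift between two repeated B-Scans from the
    peak of their 2D cross-correlation (computed via FFT) and re-align
    the moving B-Scan accordingly."""
    ref = reference - reference.mean()
    mov = moving - moving.mean()
    # Cross-correlation theorem: xcorr = IFFT(FFT(ref) * conj(FFT(mov)))
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Peak positions beyond the half-size wrap around to negative shifts
    shift = tuple(p if p < s // 2 else p - s for p, s in zip(peak, xcorr.shape))
    # Apply the integer-pixel correction (circular shift; real systems crop edges)
    return np.roll(moving, shift, axis=(0, 1)), shift
```

After alignment, temporal signal changes in static tissue regions are largely removed, so that the remaining inter-B-Scan decorrelation is dominated by blood flow.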

There are two major approaches to mitigate motion artifacts in OCTA. One approach is to acquire several independent OCTA volume scans and combine the information in post-processing [34–36]. The second approach is using real-time tracking.

The post-processing approach has the advantage of relatively short acquisition times. In practice, often only two volume scans with perpendicular orientation of the fast scanning axis are used [36]. In effect, the resulting volume is obtained by interpolation and averaging of the different input volumes. Larger data gaps from one volume are normally filled up with information from the other volume, but in general, there is no guarantee to obtain distortion-free results [5].
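The final fusion step of such an orthogonal-scan approach can be illustrated as follows. The sketch assumes the two volumes are already co-registered and that per-voxel validity masks (marking, e.g., saccade gaps) are available; the actual per-A-scan motion estimation of the cited methods is far more involved and is omitted here, and the function name is hypothetical.

```python
import numpy as np

def merge_orthogonal_volumes(vol_x, vol_y, valid_x, valid_y):
    """Fuse two co-registered OCTA volumes acquired with orthogonal
    fast-scan axes: average where both are valid, fall back to the single
    valid volume where the other has a gap (e.g. from a saccade), and
    mark voxels missing in both acquisitions as NaN."""
    merged = np.full(vol_x.shape, np.nan)
    both = valid_x & valid_y
    merged[both] = 0.5 * (vol_x[both] + vol_y[both])
    merged[valid_x & ~valid_y] = vol_x[valid_x & ~valid_y]
    merged[valid_y & ~valid_x] = vol_y[valid_y & ~valid_x]
    return merged
```

The fallback branches illustrate why the combined result is usually gap-free, while also making clear that voxels invalid in both acquisitions cannot be recovered, which is one reason distortion-free results cannot be guaranteed.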

The real-time tracking approach employs accurate measurements of the eye motion in real time (see also Chap. 3). B-Scans that are affected by too strong motion are re-acquired. During periods of slow eye motion (eye drifts), the real-time eye motion measurements can also be used to actively control the OCT scanners to keep the beam on the nominal scanning path. Because data are discarded and re-acquired in the event of strong eye motion, the real-time tracking approach can be slow in acquisition. However, the volumes obtained with real-time tracking are geometrically accurate and uniformly sampled, without gaps from missing data.

#### **6.3.4 Lateral and Axial Resolution**

The lateral resolution of an OCT system is determined by both the optical point spread function (PSF) and the sampling density, i.e., the digital resolution. The OCTA signal from one voxel can be seen as a mixture of contributions from scatterers within the support of the combined (optical and digital) PSF. The larger the PSF, the more scatterers contribute to the mixture, so that it becomes more challenging to separate the individual contributions according to their discriminating statistics, in particular the components due to flowing scatterers (blood cells) from the static components (tissue).

The optical PSF is influenced by the imaging system as well as by the imaged eye. Poor adjustment of the instrument's focus and aberrations in the eye widen the PSF and lead to suboptimal signal separation. Similarly, using wide-field optics with a smaller numerical aperture, or less dense digital sampling in favor of covering larger fields of view, compromises the ability to resolve small capillary details. The effect of the sampling density on the visibility of small capillaries is illustrated in Fig. 6.6 (compare a and b).

The axial resolution in OCT is independent of the lateral resolution and is determined by the spectral bandwidth of the light source [37]. However, using split-spectrum approaches [34] for OCTA processing, an algorithmic trade-off can be made between axial resolution and signal-to-noise ratio. Splitting the spectrum makes it possible to obtain, from a single scan, several B-Scan sections of lower axial resolution (due to the lower bandwidth of the spectral sub-bands), which are then used as additional samples with independent shot noise contributions for improving the OCTA signal. This loss of resolution may impede the ability to separate axially closely spaced but distinct vascular layers, in comparison to algorithms that maintain the high axial optical resolution of the underlying OCT signals (cf. Fig. 6.6c, d) [14].
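The split-spectrum trade-off can be sketched in a few lines. The sketch below follows the spirit of the split-spectrum amplitude decorrelation approach [34], but the band placement, window widths, and the exact decorrelation estimator are simplified illustrative choices, not the published algorithm.

```python
import numpy as np

def split_spectrum_decorrelation(fringes, n_bands=4):
    """Compute an OCTA decorrelation A-scan from repeated spectral
    interferograms ('fringes', shape (n_repeats, n_k)). The spectrum is
    split into Gaussian sub-bands (lower bandwidth -> lower axial
    resolution), each sub-band is Fourier-transformed into an A-scan,
    and the amplitude decorrelation between successive repeats is
    averaged over all sub-bands for a better signal-to-noise ratio."""
    n_rep, n_k = fringes.shape
    k = np.arange(n_k)
    centers = np.linspace(0, n_k, n_bands + 2)[1:-1]   # sub-band centers
    sigma = n_k / (2.0 * n_bands)                      # sub-band width
    decorrs = []
    for c in centers:
        window = np.exp(-0.5 * ((k - c) / sigma) ** 2)
        amps = np.abs(np.fft.fft(fringes * window, axis=1))  # sub-band A-scans
        a, b = amps[:-1], amps[1:]                     # pairs of successive repeats
        decorrs.append(1.0 - (a * b).sum(0) / (0.5 * (a ** 2 + b ** 2)).sum(0))
    return np.mean(decorrs, axis=0)
```

For perfectly static tissue the repeated amplitudes are identical and the decorrelation is zero; flowing blood raises it towards one, while the sub-band averaging suppresses independent shot noise contributions.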

#### **6.4 Clinical Application of OCTA**

#### **6.4.1 Diabetic Retinopathy**

Diabetic retinopathy (DR) is classified into different stages according to stereoscopic color fundus photographs. The typical features of DR are microaneurysms, intraretinal hemorrhages, intraretinal microvascular abnormalities, venous beading, cotton wool spots, hard exudates and neovascularization.

The gold standard imaging technique for the assessment of macular perfusion is fluorescein angiography [38]. It is able to show leaking microaneurysms and capillary non-perfusion. However, it produces only two-dimensional images in which fluorescence signals of the superficial and deep capillary networks overlap and are difficult to distinguish, especially when the dye leaks [38, 39]. In this context, OCTA is a useful imaging technique for the assessment of retinal vasculature in the different capillary networks.

Fluorescein angiography is a dynamic examination that can be used to accurately evaluate the perfusion status of the retina at the posterior pole and in the far periphery. In contrast, the application of OCTA in the assessment of peripheral vasculature is quite controversial. In fact, one of the main deficiencies of this imaging technique is its limited field of view, which suggests a use for diseases affecting the macular region. However, several features of DR have the potential to be imaged and more accurately classified with OCTA, as it offers the opportunity to show the various retinal vascular layers [40, 41]. In fact, OCTA is currently being included in clinical trials of patients with DR. OCTA may also help describe the evolution of DR, as it allows perfusion to be quantified through vascular density maps and enables the identification of features such as microaneurysms or specific regions of non-perfusion (Figs. 6.8, 6.9, and 6.10) [7, 17].

#### **6.4.2 Retinal Vein Occlusion**

Retinal vein occlusion is the second most prevalent blinding vascular retinal disorder [42, 43]. The main types are central retinal vein occlusion (CRVO) and branch retinal vein occlusion (BRVO), depending on the site of obstruction [44]. Examples are shown in Figs. 6.11 and 6.12.

The ophthalmoscopic signs are venous dilation, tortuosity, intraretinal hemorrhage, retinal edema and hemorrhage in the vicinity of the occluded vein. Fluorescein angiography typically reveals delayed filling in the distribution of the involved retinal vessels. The veins are dilated and tortuous. There is leakage from capillaries with dye accumulating in the substance of the retina or within cystoid spaces. Fluorescein angiography has been the gold standard for visualizing retinal non-perfusion areas which appear as dark areas in the images [38]. Retinal neovascularization can be identified by leakage of the fluorescein dye. Fluorescein angiography has been critically important for the diagnosis and the treatment decision.

OCTA shows the areas of vascular non-perfusion, the dilated tortuous venous segments, the microvascular abnormalities and the neovascularization. However, one of the most sight-threatening complications is macular edema. In this context, it is crucial to verify the segmentations used for OCTA visualization [45]. The difficulty with *en face* imaging is that selective enlargement of retinal layers occurs and these layers are typically not segmented correctly. The resultant *en face* vascular images may then not correctly represent the actual flow characteristics.

#### **6.4.3 Macular Telangiectasia**

Idiopathic perifoveal or juxtafoveolar retinal telangiectasia is a retinal capillary ectasia limited to the perifoveal area without any apparent specific cause [46].

Initially, idiopathic macular telangiectasia was divided into four groups according to the Gass classification [47]. Subsequently, Yannuzzi proposed a new classification in which type 1 was defined as aneurysmal telangiectasia and type 2 as perifoveal idiopathic macular telangiectasia [48].

Type 1 is closely related to Coats disease, or more specifically to a milder form of Coats disease previously known as Leber miliary aneurysms. It generally involves only one eye, and both the peripheral retina and the macula can be affected. Fluorescein angiography reveals telangiectasia and multiple capillary, venular and arteriolar aneurysms with late leakage. Type 2 is bilateral, temporal and symmetrical; however, there have been reports of unilateral, asymmetric, and asymptomatic cases [49]. It has been hypothe-

**Fig. 6.8 Multimodal imaging of diabetic retinopathy.** Fluorescein angiography (**a**, **b**) and the corresponding optical coherence tomography angiography images segmented at the superficial retinal capillary plexus (**c**) and the deep capillary plexus (**d**). Fluorescein angiography revealing vascular changes with microaneurysms and an area of non-perfusion in the temporal part. Optical coherence tomography angiography showing microaneurysms and a similar aspect of the area of non-perfusion

**Fig. 6.9 Multimodal imaging of macroaneurysm.** Infrared reflectance (**a**) showing fundus abnormalities on a temporal vascular arcade corresponding to vascular dila-

tion on OCT (**b**). Fluorescein angiography (**c**, **d**) and OCTA (**e**) revealing the macroaneurysm with a similar representation

**Fig. 6.10** Multimodal imaging of diabetic retinopathy. OCTA images segmented at the superficial retinal capillary plexus (**a**, **b**) and the deep capillary plexus (**c**, **d**), the corresponding fluorescein angiography in the early (**e**, **f**) and more advanced phase (**g**, **h**) and the corresponding

B-Scan (**i**, **j**). OCTA showing microvascular changes, microaneurysms, vascular dropout and a ragged appearance of the foveal avascular zone. Some microaneurysms are seen on the fluorescein angiography but not on the OCTA images

**Fig. 6.11** Multimodal imaging of retinal vein occlusion. Fluorescein angiography (**a**) and the corresponding optical coherence tomography angiography images segmented at the superficial retinal capillary plexus (**b**) and the deep capillary plexus (**c**). Fluorescein angiography revealing vascular changes and an area of non-perfusion in the temporal part of the retina. Optical coherence tomography angiography clearly showing the area of vascular non-perfusion, with an appearance similar to fluorescein angiography

**Fig. 6.12** Multimodal imaging of retinal vein occlusion. Fluorescein angiography (**a**–**c**) and optical coherence tomography angiography (**b**, **d**) showing vascular

changes, areas of non-perfusion and neovascularizations. The B-Scans (**e**, **f**) reveal the retinal neovascularizations projecting into the vitreous space

sized that the primary event is the alteration of Müller cells, with the vascular and tissue remodeling occurring secondarily. Fluorescein angiography displays the capillary telangiectasia with dilated and blunted retinal venules entering the temporal parafoveolar area at right angles.

OCTA shows the prominent right-angle veins and the distortion of the foveal avascular zone with cavitations. Idiopathic macular telangiectasia can be complicated by retinal-choroidal anastomosis, visible on indocyanine green angiography and OCTA (Fig. 6.13).

#### **6.4.4 Age Related Macular Degeneration**

Several studies have shown the utility of OCTA in the diagnosis and monitoring of age related macular degeneration (AMD). The advent of OCTA offered the opportunity to non-invasively visualize the neovascular networks and correlate the OCTA appearance with the standard imaging techniques and observe new findings.

In this context, age related macular degeneration is characterized by different neovascular lesions. Type 1 neovascularization occurs between the retinal pigment epithelium and Bruch's membrane, while Type 2 neovascularization is characterized by the growth of the neovascular tissue through the retinal pigment epithelium-Bruch's membrane-choriocapillaris complex into the subretinal space [50, 51]. OCTA is able to show the neovascular network with a similar appearance to indocyanine green angiography. However, indocyanine green angiography is a dynamic examination that may reveal the feeder vessel of the lesion, while OCTA is not able to provide a dynamic representation of blood flow or to detect the feeder vessel (Figs. 6.14, 6.15, 6.16, and 6.17).

**Fig. 6.13** Multimodal imaging of Type I idiopathic macular telangiectasia. Fluorescein angiography (**a**, **b**) and indocyanine green angiography (**c**, **d**) showing the dilated telangiectatic perifoveal vessels with leakage in the late

phase. OCTA at the superficial retinal capillary plexus (**e**) and deep (**f**) retinal capillary plexus revealing the dilated telangiectatic perifoveal vessels. The B-Scan (**g**) displaying the dilated telangiectatic vessel with flow sign inside

In the context of Type 1 lesions, focal polypoidal changes of the neovascular tissue can be found [52]. The gold standard to detect polypoidal lesions is indocyanine green angiography, in which single or multiple focal nodular areas of hyperfluorescence arise from the choroidal circulation, with or without an associated branching vascular network. The ability to detect polypoidal lesions by OCTA is not well defined. In the majority of cases, OCTA is not able to show the lesion due to the low velocity of the blood flow inside it, and the final appearance is then an area of absent signal (Fig. 6.18).

Type 3 neovascularization may originate from both circulations simultaneously as initial focal retinal proliferation and progression, or focal retinal proliferation with preexisting or simultaneous choroidal proliferation, or initial focal choroidal proliferation and progression [53]. Type 3 neovascularization can be visualized as a discrete

**Fig. 6.14** Multimodal imaging of Type 1 neovascularization. Early to late phase of fluorescein angiography (**a**) showing pinpoints of hyperfluorescence. Early to late phase of indocyanine green angiography (**b**) revealing central hyperfluorescent area corresponding to type 1 neovascularization. Optical coherence tomography angiography (**c**) displaying a well-defined neovascular network under the retinal pigment epithelium (**d**)

**Fig. 6.15** Multimodal imaging of Type 2 neovascularization. Early phase (**a**) of fluorescein angiography showing a well-defined neovascular network with leakage in the late phase (**b**). Early (**c**) and late (**d**) phases of indocyanine green angiography revealing the neovascular network. Optical coherence tomography angiography in the

3 × 3 mm (**e**) and 6 × 6 mm (**f**) showing the neovascular network with clearly visible and defined margins. Optical coherence tomography (**g**) displaying the detachment of retinal pigment epithelium with subretinal hyperreflective material and subretinal fluid

high-flow linear structure extending from the middle retinal layers into the deep retina, which sometimes extends through the retinal pigment epithelium on OCTA. In Fig. 6.19, OCTA reveals a branching vessel that anastomoses with the deep retinal capillary plexus and extends into the outer retina and eventually into the sub-retinal pigment epithelium space.

In contrast, geographic atrophy (GA) is a well-established end-stage manifestation of AMD [54, 15]. It results from the degeneration of photoreceptors, retinal pigment epithelium, and choriocapillaris. In this context, OCTA is a very useful imaging modality to observe the presence of choroidal neovascularization (CNV) at the peripheral border of the atrophy. In fact, detecting neovascularization with the standard angiographic examinations can be challenging due to the alteration of the retinal pigment epithelium and the exposure of normal choroidal vessels. With the appropriate segmentation, however, OCTA is able to show the neovascular network (Fig. 6.20). Moreover, OCTA has been used to evaluate the status of the choriocapillaris in patients with GA. A general loss of choriocapillaris flow associated with drusen and subretinal drusenoid deposits was found on OCTA.

**Fig. 6.16** Multimodal imaging of Type 2 neovascularization secondary to pathologic myopia. MultiColor imaging (**a**) and blue autofluorescence (**b**) showing fundus abnormalities related to pathologic myopia. Early phase (**c**) and late phase of fluorescein angiography (**d**) revealing the Type 2 neovascularization as a hyperfluorescent area that becomes

more intense with moderate leakage in the late phase (**d**). Early phase (**e**), late phase (**f**) of indocyanine green angiography and OCTA (**g**) displaying the neovascular network with well circumscribed appearance at the border of atrophy. OCT/OCTA B-Scan (**h**) showing the subretinal hyperreflective material corresponding to the neovascular lesion

**Fig. 6.17** Multimodal imaging of Type 2 neovascularization secondary to angioid streaks. Blue autofluorescence (**a**) revealing the fundus alterations secondary to angioid streaks. Indocyanine green angiography (**b**) and fluorescein angiography (**c**) showing a hyperfluorescent area that becomes more intense with moderate leakage in the late phase (**d**), consistent with Type 2 neovascularization. Optical coherence tomography displaying the area of atrophy with the neovascular tissue above the retinal pigment epithelium without sub/intraretinal fluid (**e**). Optical coherence tomography angiography in the 6 × 6 mm (**f**) and 3 × 3 mm (**g**) showing a choroidal neovascularization with a defined network that closely follows the trajectory of the angioid streak, well appreciable on the *en face* optical coherence tomography (**h**)

**Fig. 6.18** Multimodal imaging of polypoidal neovascularization. Fundus autofluorescence (**a**) showing diffuse alteration of retinal pigment epithelium and areas of atrophy. Fluorescein angiography (**b**, **c**) revealing a diffuse hyperfluorescence and hypofluorescence points. Indocyanine green angiography in the different phases

(**d**–**g**) revealing the Type 1 neovascular network with a hyperfluorescent polypoidal lesion. OCTA (**h**, **i**) showing the central neovascular network, as Type 1 neovascularization, and one polypoidal lesion, but not the polypoidal lesion in the temporal area

**Fig. 6.19** Multimodal imaging of Type 3 neovascularization. Fluorescein angiography (**a**, **b**) and indocyanine green angiography (**c**) revealing the neovascularizations (arrowheads) as two round hyperfluorescent points with leakage in the late phases. The two white lines indicate the exact location of the optical coherence tomography sections (**d**, **e**) showing the detachment of retinal pigment epithelium with intraretinal cystoid space. Optical coherence tomography angiography (**f**) revealing a tuft-shaped, high-flow lesion (open arrowheads) in the outer retinal layers abutting into the sub-retinal pigment epithelium space

**Fig. 6.20** Multimodal imaging of geographic atrophy. Fluorescein angiography (**a**, **b**) revealing the central area of atrophy as a hyperfluorescent area with staining. Indocyanine green angiography (**c**, **d**) clearly showing the medium-large choroidal vessels under the atrophic area. *En face* optical coherence tomography (**e**) and optical coherence tomography angiography (**f**) showing

the area of atrophy with rarefied choriocapillaris and Sattler layer. Optical coherence tomography (**g**, **h**) displaying hypertransmission of the signal below the level of the retinal pigment epithelium and into the choroid resulting from loss of scatter or attenuation from overlying retinal pigment epithelium and neurosensory retina

**Fig. 6.20** (continued)

#### **6.5 Conclusion**

In this chapter, the basic principles, the major sources of artifacts, and the clinical applications of OCTA were discussed. The clinical examples showed that OCTA provides diagnostic value in several vascular diseases of the eye. However, OCTA in its current state has not yet fully replaced the gold standard of dye-based angiography because of important limitations.

Ongoing endeavors to improve OCTA are addressing these shortcomings. These include targeting greater fields of view, which is especially important in DR, and improving automatic segmentation algorithms in the presence of pathological alterations, which is necessary for reliable results. With these future improvements, it is expected that robust metrics and sensitive monitoring of disease progression can be achieved.

#### **References**

1. Spaide RF, Klancnik JM Jr, Cooney MJ. Retinal vascular layers imaged by fluorescein angiography and optical coherence tomography angiography. JAMA Ophthalmol. 2015;133(1):45–50.


noise-bias correction for optical coherence tomography of the retina. Biomed Opt Express. 2018;9(2):486.


related macular degeneration in remission. PLoS One. 2018;13(10):e0205513.


tion in optical coherence tomography volumes on a per A-scan basis using orthogonal scan patterns. Biomed Opt Express. 2012;3(6):1182–99.


G., Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology. 2003;110(9):1677–82.


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

**7**

**OCT-Based Velocimetry for Blood Flow Quantification**

Boy Braaf, Maximilian G. O. Gräfe, Néstor Uribe-Patarroyo, Brett E. Bouma, Benjamin J. Vakoc, Johannes F. de Boer, Sabine Donner, and Julian Weichsel

#### **7.1 Introduction**

A complex network of blood vessels within the retina and the choroid ensures the perfusion of the photoreceptors, ganglion cells, retinal nerve fiber bundles and other retinal tissues essential for the visual system. Because various retinal diseases originate from pathologic changes in the local hemodynamics, localized blood flow measurements could potentially serve as a disease biomarker.

Fluorescence angiography is the standard clinical tool for qualitatively visualizing the structure of the blood vessel network in the back of the eye. Early detection of perfusion-related diseases, monitoring disease progression, and evaluating the effectiveness of therapeutic intervention, however, could potentially be improved by robust methods for quantifying blood flow dynamics. Studies have shown the overall clinical potential of quantitative flow measurements in retinal diseases like AMD [1], glaucoma [1–4], diabetic retinopathy [5, 6] and others. Consequently, a number of different techniques have been developed for this purpose, and the individual strengths and weaknesses of the various approaches have been discussed comprehensively in previously published review articles [1, 2, 7]. Despite progress, it remains difficult to reliably quantify blood velocities within individual blood vessels using these existing techniques. In addition, some methods report rather complex hemodynamic parameters, which are not easily interpreted and often depend on the specific measurement method employed. Measurement uncertainty and variation have even led to contradicting study results in the published literature for the same disease and under otherwise similar conditions [7]. To date, no gold standard has been established for quantitative ophthalmic blood flow measurement and velocimetry.

M. G. O. Gräfe · J. F. de Boer
Vrije Universiteit Amsterdam, HV, Amsterdam, The Netherlands

B. E. Bouma · B. J. Vakoc
Wellman Center for Photomedicine, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA

Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA

S. Donner · J. Weichsel (\*)
Heidelberg Engineering GmbH, Heidelberg, Germany
e-mail: Julian.Weichsel@HeidelbergEngineering.com

Recently, OCTA became commercially available as a tool for qualitative, non-invasive, 3D visualization of the blood vessel network of the posterior eye. In research studies as well as in the daily clinical routine, OCTA is on the way to becoming an accepted imaging modality for the

B. Braaf · N. Uribe-Patarroyo
Wellman Center for Photomedicine, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA

<sup>©</sup> The Author(s) 2019
J. F. Bille (ed.), *High Resolution Imaging in Microscopy and Ophthalmology*, https://doi.org/10.1007/978-3-030-16638-0\_7

diagnosis of retinal vascular disorders including neovascularization [8, 9] and retinal vein occlusion [10]. Current clinical implementations of OCTA provide an almost binary discrimination between static tissue and blood vessels with sufficient flow, and a very limited ability for flow velocity quantification. This makes OCTA especially applicable in clinical scenarios in which vessel perfusion drops out completely or in which pathologic vasculature grows into otherwise avascular regions of the retina. Statistical analysis of the OCTA signal mainly focuses on numerically evaluating the qualitative vascular signal and local features in the microvasculature geometry, as well as the area of extended ischemic regions [11, 12] and avascular zones. Although these analyses are sometimes termed 'quantitative OCTA' in the literature, they are not based on actual quantitative OCT velocimetry measurements of blood flow. Instead, the outcome of different established OCTA signal reconstruction algorithms is used in the analysis. As the visibility and detectability of blood flow can be highly dependent on the sensitivity of the specific OCTA algorithm [13, 14], these measurements are again not easily comparable between different devices. In order to arrive at a consistent, uniform metric for quantitative blood flow measurement in the future, a robust, easy-to-use, and reliable tool for clinical flow quantification in the eye is in demand. OCT velocimetry holds the potential to meet these requirements using non-invasive measurements. The principle of OCT-based flow velocity measurements is straightforward: an incident OCT beam is scattered from moving erythrocytes within retinal vessels (Fig. 7.1), and by carefully controlling the spatial and temporal sampling of repeated measurements the underlying blood velocity can be derived from the modulation of the back-scattered signal.
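For the phase-based variant of this principle, the relation can be made explicit: the phase difference Δφ between two A-scans acquired at the same position a time interval T apart yields the axial velocity component via the standard Doppler relation v_z = Δφ λ₀ / (4π n T), and dividing by cos α (the Doppler angle of Fig. 7.1) gives the total velocity. The sketch below is illustrative only; the function and parameter names are our own, and the tissue refractive index of 1.38 is a typical assumed value.

```python
import numpy as np

def doppler_velocity(phase_diff, wavelength0, dt, alpha_deg, n_tissue=1.38):
    """Convert the phase difference (rad) between two A-scans taken dt
    seconds apart into flow velocity (m/s) via the Doppler relation
    v_z = delta_phi * lambda0 / (4 * pi * n * dt); dividing by
    cos(alpha) converts the axial component to the velocity along
    the vessel axis."""
    v_axial = phase_diff * wavelength0 / (4.0 * np.pi * n_tissue * dt)
    return v_axial / np.cos(np.radians(alpha_deg))
```

For example, with a center wavelength of 840 nm, an A-scan interval of 20 µs, and a Doppler angle of 80°, a phase shift of π/2 corresponds to a total velocity of roughly 22 mm/s; the steep angle dependence near 90° illustrates why accurate knowledge of the vessel geometry is critical for quantitative velocimetry.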

This chapter describes the current status of quantitative OCT-based velocimetry methods for blood flow quantification. It builds upon the description of OCT and OCTA signal processing in Chaps. 3 and 6, respectively. In the first part of this chapter, the motivation and potential for OCT-based retinal flow measurements in clinical

**Fig. 7.1** Schematic drawing of an OCT measurement on a blood vessel. The Doppler angle *α* is defined between the incoming OCT light beam and the direction of the flow velocity *V*. The incident light wave **Ψ***i* is partially reflected by the moving red blood cells into the scattered light wave **Ψ***s*. The blood cell velocity is derived by examining the modulation of the scattered light wave from repeated measurements in time. The figure is adapted from Braaf [15]

research and practice is discussed. The subsequent part of the chapter summarizes different methods for flow quantification. These methods are organized by the component of the scattered signal that is analyzed: phase-based methods, amplitude-based methods, and complex-signal based methods that simultaneously use both phase and amplitude.

#### **7.2 Clinical Potential for OCT-Based Retinal Blood Flow Measurements**

Measurements of retinal blood flow have been employed in ophthalmic clinical research over the last several decades in order to characterize hemodynamic regulation in healthy eyes and to assess its implications in eyes with retinal conditions [16]. A diverse toolkit of pioneering technical approaches has been used in these studies [7], including scanning laser Doppler flowmetry [17, 18] and tracing dye concentration dynamics over time with fluorescence angiography [19]. However, despite this tremendous investigative effort, no truly quantitative measurement of physical blood velocity or flow rate has yet made a successful and permanent transition into clinical practice. Instead, only qualitative hemodynamic features are routinely assessed [20].

To some extent, the hesitation towards clinical acceptance can be explained by the limited usability of some modalities, which inevitably diminishes measurement precision and accuracy in the busy clinic. Additionally, some quantification techniques yield rather complex or only relative metrics that are mere proxies for the desired physical observables: velocity (distance per time) and volumetric flow rate (volume per time). The interpretation of current measurements and the direct comparison between different methods is therefore difficult, especially considering the complexity of physiologic hemodynamics in the eye. Blood flow is auto-regulated in combination with several other factors such as blood pressure and tissue oxygenation, while it also varies with retinal location and over time. Therefore, disentangling the influence of all involved factors requires reliable, comparable, and localized flow measurements. Due to this complexity and the lack of accurate measurement tools, many of these factors have not yet been systematically assessed in clinical research studies. Consequently, this has led to a significant portion of conflicting study results about the influence of hemodynamics on pathologies such as diabetic retinopathy [16] and glaucoma [2, 4].

Since its commercial introduction into clinical ophthalmology in 2014, OCTA imaging has had an impressive impact on research and daily clinical disease management (cf. Chap. 6). This is certainly due to its capability to noninvasively reveal the three-dimensional (3D) volumetric structure of the retinal vascular network, while the measurement itself is relatively accessible, fast, and robust. However, to some extent the success of OCTA may also seem surprising, as the interpretation of its resulting signal is not without problems. These issues include the impact of artifacts and also the qualitative signal characteristics of this technology. Without exception, every current commercially available implementation of clinical OCTA aims at reliably discriminating locations of motion from static tissue within the OCT volume scan. However, the OCTA signal offers essentially no resolution with respect to blood velocity. Therefore, current OCTA imaging is almost blind to subtle changes in blood flow that occur entirely above or below the sensitivity threshold of the angiographic signal. This becomes quite obvious, for instance, when considering the almost vanishing impact of the cardiac cycle on extended OCTA volumes. Several periods of diastole and systole are encoded into the sequentially acquired OCTA scans, but typically these variations cannot be differentiated in the resulting angiographic signal. The specific sensitivity threshold for the quasi-binary signal reconstruction task (i.e., dynamic vs. static), however, depends on the angiographic signal reconstruction algorithm employed and differs amongst the commercially available devices. Thus, regions of rather slow motion may be detected by one method while remaining unnoticed by another, depending on their individual sensitivities to motion [21].

Nevertheless, there is increasing scientific and clinical interest in deriving statistical parameters, like vessel density and geometric features of capillary networks, from such qualitative OCTA datasets and relating these metrics to retinal pathologies. Due to the mentioned differences in the available devices and OCTA signal reconstruction techniques, such analyses will have to be interpreted cautiously in order not to produce apparently conflicting study results. Otherwise such results may, at least to some extent, undermine the emerging clinical acceptance of and trust in this new imaging modality in the future.

In light of these apparent current limitations, supplementing qualitative OCTA imaging with additional quantitative OCT-based measurements in order to directly and reliably determine blood flow velocity and flow rate in absolute physical units seems a logical next step. Although it is unlikely that currently available clinical OCT hardware is capable of providing quantitative velocity measurements with the same field of view as qualitative OCTA implementations, localized quantification of blood velocity in a robust and easy-to-use OCT-based measurement has the potential to further support the interpretation of OCTA data. Such measurements could offer the potential to improve inter-device reproducibility, overall sensitivity, and the dynamic range of the OCTA signal. For true velocity measurements, the accuracy and precision of different devices could be directly assessed using independent validation techniques *in vivo* that may be more invasive or technically challenging, but can serve as an accepted gold standard for initial validation [22, 23]. Given the additional ability of OCTA to accurately determine vascular geometry in 3D and therefore also the localized cross-section of individual vessels, the direct conversion of measured velocity to volumetric flow rate is readily available in this technique.

Although it is difficult to confidently predict the future importance of quantitative blood flow measurements in clinical ophthalmology, flow is generally considered to be an important variable in the characterization of the eye in health as well as at the onset and progression of disease:

**Healthy eyes**—Due to the 3D spatial resolution of OCT, flow measurements could, in principle, be ascribed to individual and distinct retinal capillary plexuses. Such networks have been defined histologically and can also be identified and discriminated in OCTA data. Even in healthy eyes, however, a large variability of blood velocity and flow measurements in the retina has been reported. This is partly due to the different technical methods that have been used for deriving the measurements, but also to contributions from confounding factors such as intra-ocular pressure, blood pressure, and the amount of oxygen supply. This variance makes a meaningful characterization of hemodynamics challenging and suggests that robust velocity measurements will require new instrumentation and possibly methods for controlling for extraneous physiological variation.

**Glaucoma**—Although a reduction of retinal blood flow in glaucoma patients has been observed, the precise role of vascular disturbances as a cause or result of the disease remains controversial [3, 4]. However, as intraocular pressure (IOP) is currently the only treatable risk factor in glaucoma, the potential for interventions in blood flow as an additional therapeutic option in the future is certainly compelling. Additionally, in some scenarios vascular irregularities may precede structural changes in the eye and could be used as a marker for early diagnosis and progression monitoring. These hypotheses are especially plausible in progressing cases of primary open angle glaucoma, where intra-ocular pressure has already been lowered therapeutically, or in normal tension glaucoma. Here, the disease state may be progressing despite having IOP in the normal range of 12–22 mmHg, which currently leaves no further viable options for therapeutic intervention.

**Diabetic retinopathy (DR)**—Earlier studies reported both increased as well as decreased retinal blood flow in early DR [5]. Since the clinical approval of OCTA devices, ischemic regions have been quantified and characterized based on their resulting OCTA signal [12]. As discussed above, the introduction of OCT-based quantitative physical flow measurements will ultimately help improve the device independent reproducibility of the derived parameters for ischemia. Additionally, the improved sensitivity and greater dynamic range holds the potential to reliably detect even subtle changes at the onset of the retinal disease.

**Age related macular degeneration (AMD)**—The observation of flow alterations in dye-based fluorescence angiography, especially in the choriocapillaris and choroid, has been associated with AMD [1]. As the choriocapillaris provides metabolic support for the retinal pigment epithelium (RPE), such alterations in blood supply could explain some of the observed pathologic manifestations of AMD such as drusen in its nonexudative form, as well as the development of choroidal neovascularization (CNV) in exudative AMD. Both the choriocapillaris and the choroid are located below the highly reflective RPE. Therefore, using OCT it will be more challenging to collect sufficient signal strength for reliably reconstructing blood velocity from below an intact RPE compared to the vascular networks located in the inner retina.

Provided that accurate and precise flow measurements can be achieved with OCT, we anticipate that this new capability has the potential to become a very useful tool that can characterize retinal hemodynamics and derive associated biomarkers for various diseases. Along that path, this new clinical technique could help to resolve apparent conflicting results that emerged in previous studies using other measurement approaches. Additional potential lies in combining quantitative flow measurements with oximetry data in order to build a comprehensive and coherent understanding of the hemodynamic regulatory system in the future.

#### **7.3 Measuring Blood Flow with OCT**

#### **7.3.1 Phase-Based Methods**

In this section, we discuss OCT flow quantification methods that are based on the detection of signal phase or frequency changes that occur between successive OCT measurements on moving scattering particles. These methods are published in the literature under a variety of names including 'Doppler OCT', 'phase-resolved OCT' and 'joint spectral and time domain OCT'.

#### **7.3.1.1 Theory**

In OCT, frequency changes from axial scattering particle motion, parallel or antiparallel to the propagation direction of the illumination light, are observed within the interference term of the detector signal. Under the simplifying assumption of a single scattering particle, reflecting light at the axial position *zS* in the OCT interferometer sample arm and a fixed interferometer reference arm length *zR*, the resulting interference signal at the detector will show the dependency:

$$I\_{\rm det}\left(k\right) \propto \cos\Big(\underbrace{2\left(z\_S - z\_R\right)}\_{\omega\_s}\, k\Big). \tag{1}$$

Hence, the oscillatory intensity signal *Idet*(k) is a function of the wavenumber *k* with modulation frequency *ωs*. Axial motion of a scattering particle will change *zS* and therefore *ωs* in repeated acquisitions, accordingly. This change in the modulation frequency due to axial motion can be observed in and extracted from the OCT signal [24].

Standard practice in Fourier-domain OCT (FD-OCT) processing is the transformation of the *k*-dependent intensity signal into a depth dependent (*zS*-dependent) complex signal via Fourier transformation. The (small) change in the axial position of moving scattering particles changes *ωs* in Eq. (1) and manifests in a phase change in the transformed complex signal from sequential measurements [25]:

$$
\Delta\varphi = \frac{4\pi n \tau V \cos \alpha}{\lambda\_0},\tag{2}
$$

with *n* as the refractive index of the medium, *τ* as the time difference between repeated measurements, *V* as the scattering particle velocity, *λ*0 as the central wavelength of the OCT light source and Doppler angle *α* as the angle between the light propagation and the flow velocity directions as illustrated in Fig. 7.1. The Doppler frequency shift associated with the particle motion can be calculated from the phase change ∆*φ* of Eq. (2) as:

$$f\_D = \frac{\Delta\varphi}{2\pi\tau} = \frac{2nV\cos\alpha}{\lambda\_0}.\tag{3}$$

Equations (2) and (3) indicate that the Doppler frequency shift is proportional to the projected *axial* component of the velocity *vz* along the incident beam direction, *vz* = *V* cos *α*. Consequently, the actual flow velocity *V* can only be calculated when the Doppler angle *α* is known [26].
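As a compact numerical illustration of Eqs. (2) and (3), the sketch below inverts a measured phase shift for the axial velocity and the Doppler frequency. All parameter values (source wavelength, tissue refractive index, inter-scan time, vessel orientation) are assumed for illustration and are not taken from the chapter:

```python
import math

def axial_velocity_from_phase(delta_phi, tau, n=1.38, lambda0=840e-9):
    """Invert Eq. (2): v_z = V*cos(alpha) = delta_phi * lambda0 / (4*pi*n*tau)."""
    return delta_phi * lambda0 / (4 * math.pi * n * tau)

def doppler_frequency(delta_phi, tau):
    """Eq. (3): f_D = delta_phi / (2*pi*tau)."""
    return delta_phi / (2 * math.pi * tau)

# Assumed values: 840 nm source, n = 1.38, 10 us between repeated A-scans,
# and a measured phase shift of pi/2 rad.
tau, dphi = 10e-6, math.pi / 2
v_z = axial_velocity_from_phase(dphi, tau)  # axial velocity component [m/s]
f_d = doppler_frequency(dphi, tau)          # Doppler frequency shift [Hz]

# Recovering the true speed V additionally requires the Doppler angle (Eq. 2):
alpha = math.radians(80)                    # assumed vessel orientation
V = v_z / math.cos(alpha)
```

For these assumed values the π/2 phase shift corresponds to an axial velocity of roughly 7.6 mm/s, while the nearly perpendicular vessel orientation scales the true speed to several centimeters per second, illustrating the strong angle dependence discussed above.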

The improvement in acquisition speed introduced by FD-OCT enabled the *in vivo* volumetric measurement of axial velocities in the retina [27–30]. The OCT phase change ∆*φ* measured from within individual blood vessels as shown in Fig. 7.2a visualizes both the direction and the magnitude of the axial velocity component of blood flow that is either parallel or antiparallel to the incident OCT beam. In addition, the dynamics of the cardiac cycle in arteries and veins can be observed (Fig. 7.2b).

**Fig. 7.2** (**a**) Single frame from a movie showing the repeated acquisition of structural (top) and bi-directional axial blood flow (bottom) B-scans in the retina. Delineated regions in the flow image indicate: a—artery, v—vein, c—capillary, d—choroidal vessel. (**b**) Integrated flow signal over time for an artery and a vein. The dynamics of the cardiac cycle are visible as a temporal pulsation. Reprinted from White et al. [28], with permission of The Optical Society (OSA)

The dynamic range of this measurement is defined as the ratio of the maximum and minimum detectable signal. In Doppler OCT the maximum signal is given by the velocity at which the phase difference is still uniquely identified. The phase is cyclic and restricted to the range [−*π*,+*π*) radian. Flow velocities that correspond to larger phase differences outside this range are wrapped back into the original interval and can no longer be uniquely identified. Although phase unwrapping algorithms can be used to extend the velocity range, in practice these methods are often difficult to implement in a sufficiently robust way [31, 32]. Without phase unwrapping, the maximum detectable velocity is associated with a phase difference of ±π and given by:

$$v\_{z,\text{max}} = \pm \frac{\lambda\_0}{4n\tau}.\tag{4}$$

The minimum detectable signal is determined by the noise level of the measurement. The overall phase noise has two main contributions: shot noise and positioning error. Shot noise is determined by the optical power in the interferometer reference arm [33]. Regarding positioning error, in order to measure the phase shift or Doppler frequency each sample position has to be measured at least twice. A beam displacement between these two sequential scans causes a variation in the ensemble of illuminated scatterers. This rigid bulk sample displacement, for instance caused by whole sample motion or inaccurate beam positioning, can in general be decomposed into an axial and a lateral component. Bulk displacement can be compensated in post-processing to some extent only and any remaining positioning error *Δx*, relative to the beam spot diameter, leads to an additional contribution to the overall phase noise [28, 33, 34]. Blood flow perpendicular to the beam direction can also be seen as a localized lateral displacement of the measured sample and therefore contributes to or even dominates this noise term for significant lateral flow. The two phase noise contributions are statistically independent and their combined impact on the overall standard deviation *σn* of the phase shift statistics can be expressed as [33, 34]:

$$
\sigma\_n = \sqrt{\sigma\_{\text{SNR}}^2 + \sigma\_{\Delta x}^2},
\tag{5}
$$

where *σSNR* describes the shot noise contribution and *σ*Δ*<sup>x</sup>* describes the positioning error contribution to the phase noise respectively.
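Equation (5) can be evaluated directly. In the shot-noise limit, *σSNR* is often approximated as 1/√SNR (with SNR in linear units); that approximation is an assumption introduced here for illustration, not a result derived in this chapter:

```python
import math

def phase_noise_std(snr_linear, sigma_dx):
    """Combine the two independent phase-noise terms of Eq. (5) in quadrature.

    sigma_SNR ~ 1/sqrt(SNR) is a common shot-noise-limited approximation
    (an assumption here, not a result from this chapter); sigma_dx is the
    positioning-error contribution in radians.
    """
    sigma_snr = 1.0 / math.sqrt(snr_linear)
    return math.sqrt(sigma_snr**2 + sigma_dx**2)

# Example: 30 dB signal-to-noise ratio and 0.1 rad positioning-related noise.
snr = 10 ** (30 / 10)            # 30 dB -> linear factor 1000
sigma_n = phase_noise_std(snr, 0.1)
```

Note how, for these values, the positioning-error term dominates the combined phase noise; improving SNR alone then yields little benefit.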

A schematic illustration of possible Doppler phase shift measurements for different (axial) flow velocities is shown in Fig. 7.3. The signal from static tissue forms the background noise floor from which flow signals are differentiated, and is denoted here as the gray phase noise probability density function *p* with standard deviation *σn*. For flow signals, *p* gets phase shifted proportional to the mean axial flow velocity parallel to the incident OCT beam (Eq. 2) and its width is further broadened by the lateral flow component perpendicular to the beam. Three possible flow signals are indicated in Fig. 7.3 by distributions centered on an average phase shift *θj* (*j* = 1, 2, 3). A weak flow signal (*θ*1 in green) can overlap significantly with the static tissue background noise. In this case the flow signal cannot be easily distinguished from the background. On the other hand, a significantly stronger flow (*θ*2 in red) can be completely differentiated from the background noise. As long as this probability density function is still within the [−*π*,+*π*) range, the velocity can be correctly quantified. If the velocity is too high (*θ*3 in blue) the distribution can shift partially or fully outside the [−*π*,+*π*) range. In this case, exceeding phase shift values are wrapped across the periodic interval boundaries and appear on the opposite side of the range (*θ*3′). This makes the flow quantification prone to errors, since the flow velocities can appear higher or lower or even change in directionality.

**Fig. 7.3** Schematic illustration of the probability density functions for Doppler phase shifts measured for static tissue (gray) and for three different (axial) flow velocities within the phase range [−*π*,+*π*). The static tissue background noise has a probability density function *p* with standard deviation *σn*. Flow measurements sampled from distribution *θ*1 (green) cannot be easily distinguished from noise, *θ*2 (red) can be measured at its true value, while *θ*3 (blue) is affected by phase wrapping and therefore prone to errors
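The wrapping behavior sketched in Fig. 7.3 is easy to reproduce numerically. The following sketch, with assumed system parameters, draws noisy phase shifts for a velocity 20% above the Eq. (4) limit and shows that they reappear with the opposite sign:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed system parameters (illustrative only).
n, tau, lambda0 = 1.38, 10e-6, 840e-9
v_z_max = lambda0 / (4 * n * tau)      # Eq. (4): unambiguous axial velocity

def measured_phase(v_z, sigma_n, size=10000):
    """Draw noisy Doppler phase shifts (Eq. 2) and wrap them into [-pi, pi)."""
    dphi = 4 * np.pi * n * tau * v_z / lambda0 + rng.normal(0, sigma_n, size)
    return (dphi + np.pi) % (2 * np.pi) - np.pi

# A velocity 20% above the limit corresponds to a mean shift of 1.2*pi,
# which wraps to -0.8*pi and is misread as flow in the opposite direction
# (the theta_3 -> theta_3' case of Fig. 7.3).
wrapped = measured_phase(1.2 * v_z_max, sigma_n=0.1)
```

With the assumed 10 µs inter-scan time, the unambiguous axial velocity range is only about ±15 mm/s, which is why phase wrapping is a practical concern in larger retinal vessels.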

#### **7.3.1.2 Application to Retinal Imaging**

As shown in Eq. (2), the unambiguous reconstruction of the velocity of moving scattering particles requires additional knowledge of the direction of their motion as given by the Doppler angle *α*. In the following paragraphs, four quantitative Doppler flow methods are presented that calculate the true flow velocity from the axial velocity component by either estimating *α*, or by an *α*-independent Doppler signal acquisition and analysis. Afterwards, an alternative Doppler method is discussed that includes a more detailed analysis of the Doppler frequency distribution to separately determine the axial and lateral velocity components. This method might be especially interesting for blood vessels at close to perpendicular orientation to the OCT incident light—a typical scenario in retinal applications—since it is able to measure the lateral flow component.

#### **Circumpapillary Scan**

As OCT is a volumetric imaging modality, under certain conditions it is possible to reconstruct blood vessel orientations (corresponding to the Doppler angle *α*) in three spatial dimensions directly from the OCT dataset. This can be done most reliably for large vessels with a significant axial flow component, such that the Doppler signal is clearly above the level of the residual phase noise. These conditions are most reliably met within the peripapillary region where the central retinal artery and vein are located. Wang et al. [35] demonstrated a measurement based on multiple circular scans of the circumpapillary region at two different radii (Fig. 7.4a). The difference of the radii is chosen small enough that the vessel segments recorded on those circles can be assumed straight across this distance. An example of the measured Doppler phase shifts in one circular scan is illustrated in Fig. 7.4b. Based on the change in axial location of each vessel between the OCT images obtained at both radii, and the difference in the radii of both scans, an estimate of the Doppler angle *α* can be calculated. Together with the directly measured axial velocity component *vz*, the true velocity can be determined. In addition, if the vessel cross-section perpendicular to the flow velocity is also calculated, the flow velocities can be converted into flow rates within the cross sections of the vessels.

**Fig. 7.4** (**a**) Illustration of the path of two circular scans with different radii around the optic nerve head. (**b**) Doppler OCT image with grayscale display of the Doppler phase shifts. The image extends from 0° to 360° around the optic nerve head. The dark and bright spots in the retina indicate the flow in opposite directions within the large retinal vessels. Reprinted from Wang et al. [35]

#### **En Face Plane Doppler OCT**

Quantitative Doppler flow measurements from cross-sectional OCT scans have the disadvantage that they require the explicit calculation of the Doppler angle. Alternatively, quantitative Doppler flow calculations can be performed on three-dimensional OCT datasets without the need for explicit angle calculations. While the measured axial velocity component is scaled by the cosine of the Doppler angle, cos *α*, the cross-sectional area of a blood vessel in an en face image plane (the lateral plane) will scale inversely, by 1/|cos *α*|, due to its angled orientation [36]. These two effects cancel when the axial velocity is integrated over the vessel cross-section in the en face plane of the three-dimensional OCT dataset, so quantitative flow information can be obtained without explicit knowledge of the Doppler angle.
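The angle cancellation that underlies en face plane Doppler OCT can be verified with a small synthetic example. Grid spacing, vessel tilt, and flow speed below are assumed values chosen only to make the cancellation visible:

```python
import numpy as np

# Synthetic en face plane of axial velocities v_z (m/s) on a regular grid.
# A vessel tilted by Doppler angle alpha appears with a 1/|cos(alpha)| larger
# en face cross-section, while v_z = V*cos(alpha) is smaller by cos(alpha),
# so the integral Q = sum(v_z) * pixel_area is (nearly) angle-independent.
dx = dy = 5e-6                      # assumed 5 um lateral pixel pitch
alpha = np.radians(70)              # assumed vessel tilt from the beam axis
V = 20e-3                           # true flow speed, 20 mm/s (assumed)

n_pix_perpendicular = 40            # en face footprint at alpha = 0
n_pix = int(round(n_pix_perpendicular / np.cos(alpha)))  # enlarged footprint
v_z_map = np.zeros((64, 64))
v_z_map.flat[:n_pix] = V * np.cos(alpha)   # reduced axial component

Q = v_z_map.sum() * dx * dy         # volumetric flow rate [m^3/s]
Q_expected = V * n_pix_perpendicular * dx * dy
```

Up to pixel-rounding of the enlarged footprint, `Q` matches `Q_expected` regardless of the chosen tilt, which is exactly the cancellation exploited by the en face method.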

#### **Multiple Beam Doppler OCT**

As large vessels branch out and extend from the optic nerve head through the retina, their diameter becomes smaller and the vessel orientation is harder to extract from OCT measurements. Hence several methods have been developed that do not require vessel orientation, but are able to reveal flow velocities in each individual voxel of an OCT scan directly. A prominent example is the detection and/or illumination with multiple beams, as for instance three-beam illumination [37].

In multi-beam methods a fixed angular separation is used between the illumination and detection beams. This type of beam configuration ensures that one or multiple beams will have a significant Doppler angle with the target blood vessel, and problems with perpendicularly oriented vessels are therefore mitigated. The undilated pupil typically limits the lateral displacement of the illumination and detection beams to 2 mm on the cornea and consequently restricts the angular beam separation on the retinal surface to be within ±2.4° from the perpendicular. Although these Doppler angles are relatively small, together with proper calibration, accurate Doppler velocity and flow measurements can be obtained from the axial velocity components and the known angular differences between the beams. The primary disadvantage of this approach is that a complex apparatus is required to generate the multiple beams and this complexity may not be suitable for clinical instruments.
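To illustrate the principle, the following sketch uses a simplified two-beam geometry with beams coplanar with the vessel (the cited instrument [37] uses three beams to handle arbitrary vessel orientations); the beam separation and flow values are assumed:

```python
import math

def two_beam_velocity(p1, p2, delta):
    """Recover speed V and Doppler angle alpha from two beam projections.

    Assumes two beams coplanar with the vessel, tilted by +/-delta from the
    central axis, so that p1 = V*cos(alpha - delta), p2 = V*cos(alpha + delta).
    Using the sum/difference identities:
        p1 - p2 = 2*V*sin(alpha)*sin(delta)
        p1 + p2 = 2*V*cos(alpha)*cos(delta)
    """
    v_sin = (p1 - p2) / (2 * math.sin(delta))   # V*sin(alpha)
    v_cos = (p1 + p2) / (2 * math.cos(delta))   # V*cos(alpha)
    return math.hypot(v_sin, v_cos), math.atan2(v_sin, v_cos)

# Round trip with assumed values: V = 30 mm/s, alpha = 85 deg, delta = 2.4 deg
# (the maximum beam separation quoted above for an undilated pupil).
V_true, alpha_true, delta = 30e-3, math.radians(85), math.radians(2.4)
p1 = V_true * math.cos(alpha_true - delta)
p2 = V_true * math.cos(alpha_true + delta)
V_est, alpha_est = two_beam_velocity(p1, p2, delta)
```

Even for a nearly perpendicular vessel, the small but known angular separation makes the speed recoverable without any prior knowledge of the vessel orientation.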

The three-beam illumination method was used to characterize the dependency between vessel diameter and flow velocity for arteries and veins, which showed a linear relation (Fig. 7.5) [37]. It was concluded from this study that the retinal blood vessels seem to be configured to slow down red blood cells when they approach the capillaries where oxygen is exchanged with the tissue.

**Fig. 7.5** Mean velocity versus vessel diameter as measured with three-beam Doppler OCT. Left: venous velocity. Right: arterial velocity. Filled area: 95% confidence interval of the fit. The results show a linear relation between the vessel diameter and the flow velocity. Reprinted from Haindl et al. [37], with permission of The Optical Society (OSA)

#### **Digital Filtering in Full Field OCT**

Following the idea of multiple illumination and detection directions in order to mitigate the dependence of the phase shift measurement on the Doppler angle, another approach based on Fourier Optics has been developed [38]. This approach uses the concept that the light field in one focal plane of an ideal focusing lens and the light field in the opposite focal plane are Fourier pairs. Using this concept, it is possible to perform a two dimensional Fourier analysis of an OCT volume that provides, in post processing, digital reconstructions of OCT volumes as if illuminated and detected by multiple different and linearly independent beams. This approach, however, relies on phase stable OCT volumes that have to be acquired at sufficient speed to avoid major artifacts from sample motion *in vivo*. This is not achievable with currently available hardware for point scanning OCT devices, where each lateral position is acquired sequentially. Rather, the parallelized approach of full-field swept-source OCT (FF-SS-OCT) needs to be employed [39, 40]. Instead of a rapid sweep over the light source bandwidth and a data acquisition of one A-line location per sweep, the full field of view is captured with a high frame rate 2D camera at a series of wavelengths while slowly sweeping the light source, resulting in an effective A-line rate of about 39 MHz [39, 40]. The high volume acquisition rate and the phase stability over the full volume are necessary for the successful acquisition of phase stable OCT volumes using FF-SS-OCT. Unfortunately, the use of a spatially coherent light source in such instruments greatly increases the detection of multiply scattered light, limiting the usable image depth. In addition, due to the slow wavelength sweep, patient motion that would not affect point scanning clinical systems can degrade the FF-SS-OCT image quality [39, 41, 42].
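The Fourier-optics idea can be sketched as follows: given a phase-stable complex en face field, masking off-center sub-apertures in its 2D spatial-frequency spectrum emulates detection from different directions. This is a schematic illustration only; the aperture positions, sizes, and the random test field are assumptions, not details of the cited implementation [38]:

```python
import numpy as np

def subaperture_field(en_face_field, center, radius):
    """Digitally select one 'beam direction' from a phase-stable complex
    en face OCT field by masking a circular sub-aperture in the 2D spatial
    Fourier domain (a schematic sketch of the Fourier-optics concept;
    parameters are illustrative, not from the cited implementation)."""
    spectrum = np.fft.fftshift(np.fft.fft2(en_face_field))
    ky, kx = np.indices(spectrum.shape)
    mask = (kx - center[0])**2 + (ky - center[1])**2 <= radius**2
    return np.fft.ifft2(np.fft.ifftshift(spectrum * mask))

# Two off-center sub-apertures emulate two tilted detection directions;
# phase differences between the resulting fields then encode motion
# projected onto two linearly independent axes.
field = np.exp(2j * np.pi * np.random.default_rng(1).random((128, 128)))
f_left = subaperture_field(field, center=(44, 64), radius=16)
f_right = subaperture_field(field, center=(84, 64), radius=16)
```

Because the sub-apertures are selected purely in post-processing, the effective "beam directions" can be chosen freely after acquisition, which is the key flexibility of this approach.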

Nevertheless, an experimental realization of this method was demonstrated for *in vivo* retinal flow measurements to create velocity maps and for tracking the thermal expansion of excised porcine retinas [40] (Fig. 7.6). Once repeated volumetric scans were acquired, the illumination directions could be chosen freely in post-processing within the range of acquired wave vectors. As seen in Spahr et al. [43], retinal vessels produce artifacts due to the slow wavelength sweep and multiply scattered photons, which suggests that the applicable velocity range of this technique is smaller than that of the techniques discussed earlier.

**Fig. 7.6** (**a**) Structural image of ten averaged B-scans from an FF-SS-OCT system. The slow wavelength sweep and multiple scattered photons from large blood vessels create artifacts in the image [43]. (**b**) The same setup was also used for flow velocity measurements in retinal vessels at two different time points at 535 ms (upper) and 1340 ms (lower). It is possible to differentiate between fast and slow flows, and to observe cardiac cycle pulsation over time. Reprinted from Spahr et al. [43], © Georg Thieme Verlag KG

#### **Analysis of the Doppler Frequency Bandwidth**

All previously introduced methods for phase-based flow quantification measure a Doppler signal that is largely determined by the axial velocity component. If the axial velocity is very small due to a nearly perpendicularly oriented vessel, the required measurement accuracy of the Doppler angle α to extract a meaningful flow velocity becomes prohibitive. In the human retina this aspect becomes extraordinarily challenging for blood vessels far away from the optic nerve head that are oriented along the retinal surface. These vessels typically have a Doppler angle that is almost perpendicular to the incident beam direction. To overcome this limitation, a different approach can be employed that utilizes the effect of lateral sample motion on the phase or frequency of the OCT signal.

The OCT phase shift and corresponding Doppler frequency resulting from scattering particle motion are random variables whose means are given in Eqs. (2) and (3), respectively. Motion, however, impacts not only the mean of the corresponding statistical distribution of these measurements, but also its width. Local lateral and axial blood flow components broaden the distributions (cf. Eq. 5). Bouwens et al. [44] quantified this effect in detail and derived relations between the mean and the variance of the Doppler frequency distribution for different modes of the illumination beam and detected light, for low to high numerical apertures. For ophthalmic systems, Gaussian beam profiles and low numerical apertures can be assumed. The mean of the frequency distribution is then expressed by Eq. (3) (with *vz* = *V* cos *α*) and the variance is given as:

$$
\sigma^2 = \frac{n^2 k\_0^2 w\_0^2}{4\pi^2 f^2} v\_T^2 + \frac{1}{\pi^2 l\_c^2} v\_z^2. \tag{6}
$$

In Eq. (6), *k*0 is the central wave number, *vT*<sup>2</sup> = *vx*<sup>2</sup> + *vy*<sup>2</sup> is the squared total lateral velocity, *w*0 is the beam diameter at the entrance pupil, and *f* is the focal length. The parameter *lc* = 1/(*nkσ*) is the coherence length with the spectral bandwidth *kσ*. It can be appreciated from Eq. (6) that the Doppler frequency bandwidth is broadened by lateral as well as axial motion [44, 45].
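In practice, Eq. (6) can be inverted for the total lateral speed once the axial velocity is known from the mean Doppler shift (Eq. 3). The sketch below performs this inversion on a synthetic round trip; all optical parameters are assumed, illustrative values:

```python
import math

def lateral_speed_from_bandwidth(sigma2, v_z, n, k0, w0, f, l_c):
    """Invert Eq. (6) for the total lateral speed v_T = sqrt(vx^2 + vy^2),
    given the measured Doppler-frequency variance sigma2 and the axial
    velocity v_z (e.g. obtained from the mean Doppler shift, Eq. 3)."""
    axial_term = v_z**2 / (math.pi**2 * l_c**2)
    coeff = (n**2 * k0**2 * w0**2) / (4 * math.pi**2 * f**2)
    return math.sqrt(max(sigma2 - axial_term, 0.0) / coeff)

# Round trip with assumed system parameters (illustrative values only).
n, w0, f = 1.38, 1.0e-3, 17e-3           # 1 mm beam diameter, 17 mm focal length
k0 = 2 * math.pi / 840e-9                # central wavenumber at 840 nm
l_c = 10e-6                              # coherence length, assumed
v_T_true, v_z = 10e-3, 1e-3              # 10 mm/s lateral, 1 mm/s axial
sigma2 = (n**2 * k0**2 * w0**2) / (4 * math.pi**2 * f**2) * v_T_true**2 \
         + v_z**2 / (math.pi**2 * l_c**2)
v_T = lateral_speed_from_bandwidth(sigma2, v_z, n, k0, w0, f, l_c)
```

For these values the lateral term dominates the variance by several orders of magnitude, which is why the bandwidth analysis is attractive precisely for the near-perpendicular vessels that defeat mean-shift Doppler methods.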

The above mentioned characteristics made this technique easier to use in situations where samples can be fixed, e.g. as in optical coherence microscopy, but the suitability for retinal measurements has so far not been established. The first demonstrations to determine axial and lateral velocities were done in phantom measurements (Fig. 7.7, here only lateral velocities shown). A glass capillary tube was connected to a syringe pump. To mimic the properties of a scattering fluid, a solution of polystyrene beads was pumped through the capillary at a constant flow rate. Parabolic profiles were fitted to the measured velocities (a) and the derived maximal velocities were plotted against the velocities set by the syringe pump (b). The results show a close representation of the expected values.

**Fig. 7.7** Depth profiles at the center of a capillary (α = 81°) measured from the Doppler frequency bandwidth with extended focus OCM (xfOCM). (**a**) The lateral flow component measured for different flow rates. Parabolae are fitted to the measurements, assuming *V* = 0 at the capillary wall. (**b**) The maxima of the parabolae are extracted and compared with the expected velocity as set by the syringe pump under laminar flow conditions. Error bars represent standard deviation over ten measurements. Reprinted from Bouwens et al. [44], with permission of The Optical Society (OSA)

#### **7.3.2 Amplitude Based Flow Quantification**

Conventional Doppler OCT techniques as discussed above rely on the OCT signal phase for flow velocity quantification. These methods have proven accurate under the conditions that the Doppler angle is known and significantly away from perpendicularity, and that the lateral component is small enough to enable meaningful phase differences to be measured. It can, however, be challenging to fulfill these requirements *in vivo*, not only because the OCT system has to be configured to achieve phase stability, but most importantly because flow velocities in retinal vessels away from the optic nerve are dominated by the lateral velocity component. Amplitude-based methods try to address these drawbacks by analyzing the fluctuations of the OCT signal in response to the finite transit time of the scattering particles as the flow passes through the illumination beam.

OCT is a coherent technique and as such it is inherently subject to speckle. This phenomenon arises from the coherent superposition of light scattered from multiple points within a sample in a similar fashion to speckle observed in ultrasound images. The OCT signal at a given location in the sample is a complex value *F* (the OCT complex amplitude), and is the result of the superposition of phasors due to the signal from each scattering particle inside the OCT resolution volume. Assuming a random distribution of scatterers, the statistics of the speckle amplitude are described by a random process. As particles move due to their flow velocity, a subset of scatterers moves out of the OCT probing volume, a new subset moves into it and a fraction remains. This continuous process produces a change in the coherent phasor sum as a function of time, and therefore the complex value of the tomogram at a given location.

The squared magnitude of the phasor sum is the OCT intensity *I*, which is generally displayed on a logarithmic (or another nonlinear) scale to show the structural OCT image. The intensity fluctuations also follow specific statistics linked to the fluctuations of *F*, and carry information about particle flow as well. The square root of *I*, the OCT amplitude, also presents a similar relationship. For this reason we refer to all these techniques as amplitude-based velocimetry.

In general, the faster the particles are replaced inside the OCT probing volume due to their flow speed, the faster the complex amplitude and intensity signals will fluctuate. Amplitude-based velocimetry links the dynamics of these fluctuations to the flow velocity, as we will see in the following sections. It is important to note that amplitude velocimetry techniques in OCT are less mature than phase-based methods, in part due to challenges in the correct interpretation and analysis of the data, as well as due to their different, although potentially simpler, implementation at the hardware level. We start this section by considering the statistics of the fluctuations of the complex amplitude, and then continue with a focus on intensity fluctuations.

#### **7.3.2.1 Complex Amplitude: Dynamic Light Scattering Optical Coherence Tomography**

Dynamic light scattering optical coherence tomography (DLS-OCT) provides a comprehensive model for the OCT complex-valued signal and its temporal evolution in the presence of moving scatterers, like red blood cells in vascular flow. DLS-OCT represents a well-validated link between the OCT signal and moving scatterers in a single scattering regime.

The first-order autocorrelation function of the complex-valued OCT signal is calculated as [46–50]

$$g^{(1)}(\hat{z},\tau) = \left\langle F(\hat{z},\tau)F^\*(\hat{z},0) \right\rangle,\tag{7}$$

where *F* represents the complex backscattering signal of the sample at depth *ẑ*, 〈…〉 represents an ensemble average, and * the complex conjugate. *F* is given by the Fourier transform of the fringe signal [51]. We define the axial propagation direction of the beam as *z*, the lateral plane dimensions as *x* and *y*, and the flow velocity vector as (*vx*, *vy*, *vz*). The autocorrelation of a signal compares its values at variable time differences *τ* and thus provides information on the typical time scale at which previous signal values still influence the present signal. Equation (7) is customarily normalized by a factor 1/〈|*F*(*ẑ*, 0)|<sup>2</sup>〉, such that when the signal is perfectly correlated *g(1)* = 1, and when the signal is totally decorrelated *g(1)* = 0. The normalized *g(1)* is approximately given by [51]

$$\begin{split} g^{(1)}\left(\tau\right) &= \exp\left[-iF\_{D}\left(v\_{z}\right)\tau\right] \exp\left[-F\_{B}\left(D\right)\tau\right] \exp\left[-F\_{G}\left(\frac{\partial v\_z}{\partial x}, \frac{\partial v\_z}{\partial y}, \frac{\partial v\_z}{\partial z}\right)\tau^{2}\right] \times\\ & \qquad \exp\left[-F\_{x}\left(v\_{x}^{2}\right)\tau^{2}\right] \exp\left[-F\_{y}\left(v\_{y}^{2}\right)\tau^{2}\right] \exp\left[-F\_{z}\left(v\_{z}^{2}\right)\tau^{2}\right], \end{split} \tag{8}$$

where all *F*() functions are positive and define different contributions to the decay of the signal correlation with *τ* [50]. *FD* is the familiar Doppler term discussed before in Eqs. (2) and (3) and forms the only imaginary exponent, *FB* is a Brownian motion term with diffusion constant *D*, *FG* is a gradient term, and *Fx*, *Fy*, *Fz* are velocity terms [51]. Terms that contribute comparatively little to the decorrelation have been omitted here [50]. Under normal flow conditions, the Brownian motion contribution can also be ignored, and the gradient and velocity terms dominate.
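The estimation of *g(1)* from sampled data, Eq. (7), can be sketched in a few lines of Python. The signal below is a synthetic single-voxel model, not measured OCT data, and all parameter values are illustrative:

```python
import numpy as np

def g1(F, max_lag):
    """Normalized first-order autocorrelation, Eq. (7):
    g1(k) = <F(t+k) F*(t)> / <|F|^2>, so g1(0) = 1 and |g1| -> 0
    once the signal has fully decorrelated."""
    F = np.asarray(F, dtype=complex)
    norm = np.mean(np.abs(F) ** 2)
    return np.array([np.mean(F[k:] * np.conj(F[: len(F) - k])) / norm
                     for k in range(max_lag)])

# Synthetic signal: an AR(1) process whose complex pole combines a Doppler
# phase rotation (axial velocity) with an exponential decorrelation.
rng = np.random.default_rng(1)
dt, tau_c, f_d = 1e-5, 2e-4, 5e3   # sampling step, correlation time, Doppler freq (illustrative)
a = np.exp((2j * np.pi * f_d - 1.0 / tau_c) * dt)
n = 50_000
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
F = np.empty(n, dtype=complex)
F[0] = w[0]
for i in range(1, n):
    F[i] = a * F[i - 1] + np.sqrt(1 - abs(a) ** 2) * w[i]

corr = g1(F, 50)
print(abs(corr[0]))        # exactly 1 by construction of the normalization
print(np.angle(corr[1]))   # phase slope per lag, ~2*pi*f_d*dt (the Doppler term)
```

The amplitude of `corr` decays with lag while its phase advances linearly, mirroring the separation into decay terms and the imaginary Doppler exponent in Eq. (8).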

Figure 7.8a shows a representative *g(1)(τ)* calculated from an OCT flow signal. The amplitude is shown in grey, exhibiting a Gaussian decay with *τ*. If an isotropic resolution volume is used and the flow velocity is mostly perpendicular to the OCT beam, the amplitude decay is insensitive to the flow direction and directly related to the total flow speed. Gradient effects play a role when axial flow is present and destroy the direct relationship between decay and total flow speed. The phase, in purple, shows a linear decrease with *τ* (with phase wrapping effects), and its evolution is directly linked to the axial velocity of the scatterers. The phase information is therefore directional: a change in the sign of the axial velocity reverses the sign of the phase evolution.

**Fig. 7.8** (**a**) Exemplary DLS-OCT curve from an OCT signal. The amplitude (grey solid) and the phase (purple dashed) as a function of time difference. The upper pair represents a sample at moderate flow while the lower pair corresponds to fast flow. (**b**) The Fourier pair of *g(1)(τ)* shows the Doppler peak as generally visualized in Doppler OCT. The location of the peak is given by the phase slope in *g(1)(τ)*, while its width is related to the *g(1)(τ)* decay

*g(1)(τ)* contains a wealth of information about scatterer motion and provides insights into the behavior not only of complex-amplitude velocimetry, but also of phase methods. In Fig. 7.8b, inverse Fourier transformation of *g(1)(τ)* provides the power spectrum of the OCT signal. The location of the peak (what is generally called the Doppler peak) is given by the phase slope in *g(1)(τ)*. The broadening of the peak, which is what Doppler variance measures, depends on all the contributions to the decorrelation in Eq. (8), including lateral flow and axial gradients. This explains the sensitivity of Doppler variance to lateral flow, and makes evident the need to also incorporate gradient effects into the Doppler variance framework for accurate velocimetry.
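The Fourier relation between *g(1)(τ)* and the Doppler power spectrum (Fig. 7.8b) can be checked numerically with a reduced model of Eq. (8), a Doppler phase ramp multiplied by a Gaussian decay; all parameter values are illustrative:

```python
import numpy as np

# Reduced model of Eq. (8): Doppler phase ramp times a Gaussian decay.
dt = 1e-5                                # lag sampling step, s (illustrative)
tau = np.arange(-512, 512) * dt
f_d, tau_c = 3e3, 5e-4                   # Doppler frequency and decay time (illustrative)
g1 = np.exp(2j * np.pi * f_d * tau) * np.exp(-(tau / tau_c) ** 2)

# Wiener-Khinchin: the power spectrum is the Fourier transform of g1(tau).
spec = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g1))))
freqs = np.fft.fftshift(np.fft.fftfreq(len(tau), dt))

peak = freqs[np.argmax(spec)]
print(peak)   # close to f_d: the Doppler peak sits at the phase-slope frequency
```

A faster amplitude decay (smaller `tau_c`) broadens the peak without moving it, which is exactly the Doppler-variance broadening discussed above.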

In practice, flow data is acquired by recording B-scans with slow lateral scanning or by recording M-mode scans without lateral scanning at multiple locations. The complex-valued signal is then analyzed to calculate *g(1)(τ)*, which is either used in a fit to determine the decay coefficients in Eq. (8) or analyzed to find a decorrelation rate. The decorrelation rate is then displayed in a two-dimensional cross-sectional speckle decorrelation map (see Fig. 7.9a), with a corresponding structural image given by the intensity (see Fig. 7.9b). The decorrelation map can then be transformed into a flow speed map using Eq. (8) under many practical conditions.

#### **7.3.2.2 Intensity: Speckle Decorrelation**

Making use of the Siegert relationship, which states that under certain conditions $g^{(2)} = 1 + |g^{(1)}|^{2}$ [50], we can transform Eq. (8) into the second-order autocorrelation function of the OCT signal intensity *I*, defined as

$$g^{(2)}\left(\tau\right) = \frac{\left\langle I\left(\tau\right)I\left(0\right)\right\rangle}{\left\langle I\left(\tau\right)\right\rangle\left\langle I\left(0\right)\right\rangle},\tag{9}$$

which describes the so-called speckle decorrelation approach, also known as intensity-based DLS-OCT (iDLS-OCT). In this technique, only the fluctuations in intensity of the OCT signal (as shown in Fig. 7.9b) are analyzed using an autocorrelation approach, and the decorrelation rate is calculated. The main advantage of this technique is the fact that most clinical wavelength-swept source OCT systems are not phase stable and therefore cannot be used for acquiring DLS-OCT data. In contrast, phase instability does not affect the decorrelation of the intensity signal, and therefore iDLS-OCT data can be acquired with any OCT system. As with the *g(1)(τ)* decay, there is no directionality information in speckle decorrelation. With some transformations to account for Brownian motion and flow gradients, the decorrelation time is approximately inversely proportional to the flow speed when no significant axial component of the flow is present, which is precisely the case for retinal flow away from the optic nerve head [52, 53].

**Fig. 7.9** Representative DLS-OCT measurement in a flow phantom setup. (**a**) Decorrelation analysis of the complex-amplitude tomogram. (**b**) Intensity tomogram acquired by a slow lateral scan
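The Siegert relationship underlying Eq. (9) can be verified numerically for an ensemble of circular-Gaussian speckle fields. The toy AR(1) ensemble below is a sketch with illustrative parameters, not real OCT data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_traj, n_t = 20_000, 16
a = 0.9   # per-step field correlation (illustrative)

# Ensemble of circular-Gaussian speckle fields evolving as AR(1) processes,
# so that the field statistics satisfy the conditions of the Siegert relation.
F = np.empty((n_traj, n_t), dtype=complex)
F[:, 0] = (rng.standard_normal(n_traj) + 1j * rng.standard_normal(n_traj)) / np.sqrt(2)
for t in range(1, n_t):
    w = (rng.standard_normal(n_traj) + 1j * rng.standard_normal(n_traj)) / np.sqrt(2)
    F[:, t] = a * F[:, t - 1] + np.sqrt(1 - a ** 2) * w

I = np.abs(F) ** 2
lag = 10
g1 = np.mean(F[:, lag] * np.conj(F[:, 0])) / np.mean(I[:, 0])
g2 = np.mean(I[:, lag] * I[:, 0]) / (np.mean(I[:, lag]) * np.mean(I[:, 0]))
print(g2, 1 + abs(g1) ** 2)   # Siegert: the two agree for Gaussian statistics
```

For this field, the intensity autocorrelation `g2` computed from Eq. (9) matches `1 + |g1|**2` up to sampling noise, which is why the decorrelation rate can be recovered from intensity alone.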

There are important practical considerations regarding the use of amplitude-based methods. Because the quantification lies in the decay of the autocorrelation function, this decay has to be properly sampled. This means that the temporal resolution of the OCT system, corresponding to the revisiting time of the scanning paradigm, has to resolve the decorrelation rate set by the flow and other system parameters. If the time sampling is insufficient, we move into the realm of traditional OCT angiography techniques, which do not sample the decay in case of flow, but only distinguish between moving particles and static tissue in an effectively binary way.
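As a rough sanity check of this sampling requirement, one can relate a vessel's decorrelation time to the required revisiting time. The factor of five samples per decay below is our illustrative rule of thumb, not a value from the chapter:

```python
def min_revisit_time(decorr_time_s, samples_per_decay=5):
    """Rough upper bound on the scan revisiting time needed to sample an
    autocorrelation decay: place ~samples_per_decay points within one
    decorrelation time (illustrative rule of thumb, hypothetical helper)."""
    return decorr_time_s / samples_per_decay

# E.g. a vessel decorrelating in 1 ms would need revisits every ~0.2 ms,
# i.e. an effective 5 kHz revisit rate at that location.
print(min_revisit_time(1e-3))
```

Revisiting times longer than this bound leave only the binary moving/static contrast of conventional OCT angiography.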

A different take on the amplitude decorrelation techniques consists of analyzing the properties of the members of the ensemble in Eqs. (8) and (9) instead of taking the ensemble average. This is in essence the principal component analysis (PCA) presented by Mohan et al. [54]. Experiments on phantoms showed the potential of this method for flow velocity quantification with a reduced number of data points. In addition, analysis of *in vivo* blood flow in a mouse ear showed good agreement with conventional Doppler OCT measurements. These properties make this method attractive for quantitative retinal flow imaging, but further studies are needed to demonstrate its applicability.

DLS-OCT and its derivatives have significant potential to provide detailed information on retinal blood flow. However, most validations have been carried out under conditions that are far from those present in retinal flow, such as using solutions in which single scattering is dominant, or using solid phantoms. Although there are publications with in vivo cerebral blood flow results, no *in vivo* retinal flow measurement has been demonstrated yet. The most attractive property of DLS-OCT for ophthalmic imaging is the potential for blood flow quantification with a single imaging beam, without knowledge of the geometry or corrections for the Doppler angle.

#### **7.3.2.3 Alternative Speckle Decorrelation Methods**

In addition to the widely used speckle decorrelation method described above, several noteworthy alternative methods with potential clinical use have been developed. These methods sit between fully quantitative decorrelation methods and OCT angiography: their time sampling is typically too sparse to sample the fast decay of large retinal vessels, but sufficient to give a coarse degree of quantification for smaller vessels.

The split-spectrum amplitude decorrelation angiography (SSADA) algorithm [55] is similar to iDLS-OCT but uses the square root of the intensity signal and a modified normalization constant, and the axial resolution is reduced via spectrum splitting to perform averaging. Due to its relationship with the second-order autocorrelation function (Eq. 9), it theoretically has the potential for flow quantification. However, all *in vivo* retinal measurements so far have had insufficient temporal resolution to obtain meaningful quantification. Tokayer et al. [56] showed that quantification could be achieved by increasing the time sampling, analyzing M-mode scans acquired with a static beam on a blood-infused capillary sample.

The assessment of OCT intensity decorrelation at different time intervals also forms the basis for the variable interscan time analysis (VISTA) method of Choi et al. [57]. In this method, OCTA intensity decorrelation images are obtained at two time intervals, with the longer interval (3.0 ms) at double the rescan time of the short interval (1.5 ms). This is in essence an autocorrelation function with only two sampled time differences *τ*. This limited information is used to determine fast flows (those decorrelated at both time intervals) and a range of slow flows (those with a different degree of decorrelation at the two time intervals). This range is mapped to a color scale which provides information to qualitatively assess flow velocity differences between (capillary) vessels [58]. An example of VISTA flow imaging is shown in Fig. 7.10 for a patient with polypoidal choroidal vasculopathy (PCV). Figure 7.10a, d show ICGA images in which the polyp can be clearly visualized, with the bright periphery of the polyp indicating increased flow towards the polyp wall (arrow). The branching vascular network (BVN, marked by the dashed line) is visible but obscured by the fluorescence of vessels at different axial depths. In OCTA projections from different depths (Fig. 7.10b, e) it can be appreciated that the polyp is located in a more superficial layer than the choroidal neovascularization. Figure 7.10c, f show VISTA images from the same depths as the OCTA projections. In Fig. 7.10b as well as Fig. 7.10c, it can be seen that blood flow in the outer part of the polyp is faster than in its central part, which is in good agreement with the ICGA finding (inset Fig. 7.10d). The BVN (see inset Fig. 7.10f) presents relatively slow flow compared to the regular retinal vessels.
Although VISTA images could be a good qualitative aid in assessing retinal flow properties, the unknown temporal relation of the VISTA decorrelation signals and their limited time sampling currently limits the extension of this method into a full quantitative flow measurement.
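The two-interval logic can be sketched as follows, using a hypothetical exponential decorrelation model sampled at the 1.5 ms and 3.0 ms interscan times quoted in the text. The ratio mapping is our illustration, not the published algorithm:

```python
import numpy as np

def vista_ratio(decorr_short, decorr_long):
    """VISTA-style ratio of the decorrelation measured at two interscan
    times. Flows fast enough to decorrelate fully at both intervals give ~1;
    slower flows give smaller values, which can be mapped to a color scale.
    (Sketch of the two-interval idea; the mapping is illustrative.)"""
    return float(np.clip(decorr_short / max(decorr_long, 1e-12), 0.0, 1.0))

# Hypothetical decorrelation model D(tau) = 1 - exp(-tau/tau_c),
# evaluated at the short (1.5 ms) and long (3.0 ms) interscan times.
tau_s, tau_l = 1.5e-3, 3.0e-3
ratios = []
for tau_c in (0.3e-3, 1.5e-3, 6.0e-3):   # fast, moderate, slow flow
    d_s = 1 - np.exp(-tau_s / tau_c)
    d_l = 1 - np.exp(-tau_l / tau_c)
    ratios.append(vista_ratio(d_s, d_l))
print([round(r, 3) for r in ratios])      # decreases monotonically with tau_c
```

Fast flow saturates both intervals (ratio near 1), while slower flows separate the two measurements, which is the contrast VISTA color-codes.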

#### **7.4 Discussion and Conclusion**

Noninvasive quantitative blood flow measurements in the human eye hold the potential to provide new clinically relevant insights into retinal hemodynamics in health and disease. For their future integration into clinical practice, however, a robust, fast, and reliable measurement will be required. Typical currently employed approaches for OCT-based velocimetry rely on analyzing modulations in the phase, the amplitude, or both, of the complex OCT signal backscattered from moving red blood cells. While recent research has yielded significant progress in each individual approach, all of them still have limitations and difficulties to be overcome. Therefore, each method may be superior in certain application scenarios and for imaging different locations of the retina, and a combination of different methods may be the most feasible route toward a truly versatile clinical measurement technique.

**Fig. 7.10** Figure adapted from Rebhun et al. [58, 59]. Polypoidal choroidal vasculopathy (PCV) on VISTA-OCTA; red indicates faster blood flow speeds, and blue indicates slower speeds. Left eye of a 61-year-old woman with PCV. (**a**) ICGA showing the branching vascular network (BVN) and polypoidal lesion. (**d**) Larger scale of the macular area documented by ICGA in (**a**) (white dashed line), with the BVN (white dotted line) and a polyp with bright periphery toward the polyp wall and dark center (white arrow). (**b**, **c**, **e**, **f**) OCTA images of the same eye, but projected from different axial depths (different segmentation levels) capturing different components of the lesion. (**b**, **c**) clearly show the polyp with blood flow toward the polyp wall, but not in the center. (**e**, **f**) clearly show the BVN with relatively fast flow (see **f**). (**c**, **f**) are OCTA scans applying VISTA-OCTA

Current clinically employed OCT technology does not yet offer the temporal repetition rate and overall acquisition speed for repeated scans over extended retinal regions that would allow feasible quantitative velocimetry over fields of view comparable to those typically employed in structural OCT and qualitative OCTA measurements. Independent of the specific analysis approach, the same moving blood cell has to be imaged repeatedly over multiple measurements, which dictates the minimum scan repetition rate as well as the minimum number of repeated scans at an individual vessel location for the fastest and slowest flow, respectively. The physiological change in flow rate across the heart cycle further complicates the situation and may additionally require imaging of individual vessels over more than a second to provide a comprehensive signal. Nevertheless, localized measurements of flow within a number of individually selected vessels, for instance around the optic nerve head, may already provide clinically useful information on current clinical device hardware. Future generations of OCT devices will certainly help mitigate these bottlenecks. Ultimately, the combination of quantitative retinal blood flow with oximetry measurements may contribute another important puzzle piece to the comprehensive understanding of the role of dysfunctional metabolic supply in retinal disease.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

**8**

# **In Vivo FF-SS-OCT Optical Imaging of Physiological Responses to Photostimulation of Human Photoreceptor Cells**

Dierck Hillmann, Clara Pfäffle, Hendrik Spahr, Helge Sudkamp, Gesa Franke, and Gereon Hüttmann

#### **8.1 Introduction**

The human visual system achieves an extraordinary level of performance, which has only recently been matched by technical systems. Three components contribute to our vision capabilities: the optics of the eye, the light detection by the photoreceptor cells (PRCs), and the processing of the visual information in the neuronal structures of the retina and the brain. Diseases are known to interfere with the physiological function of the visual system on all three levels: They may result in failing to

D. Hillmann

Thorlabs GmbH, Lübeck, Germany

Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

C. Pfäffle · H. Spahr · G. Franke Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

H. Sudkamp Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

Medical Laser Center Lübeck GmbH, Lübeck, Germany

G. Hüttmann (\*) Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

Medical Laser Center Lübeck GmbH, Lübeck, Germany

Airway Research Center North (ARCN), Member of the German Center of Lung Research (DZL), Gießen, Germany

form a high-quality image on the retina, in a loss of photoreceptor cell function, or in deficiencies in the neuronal processing of the visual information. Consequently, testing vision on all three levels is important for the diagnosis and therapy of ophthalmic diseases.

Subjective methods, which require patient feedback, and objective methods are available. Subjective techniques use the perception of the patient evoked by an optical stimulus to test the function of the visual system. They have the drawback of measuring only the integral performance of all steps in vision and require the cooperation of the patient. Subjective vision testing ranges from simple vision tests to perceptive investigations after stimulating single photoreceptor cells using adaptive optics [1, 2]. However, quantification is challenging since the data are inherently based on subjective observations.

Objective methods measure visual function directly and give quantitative results on certain functional parameters, such as refraction, transparency of the optical media, field of view, or light sensitivity. Furthermore, objective measurement of visual function may often be the only option for children, elderly people, and other non-cooperating patients. A number of devices are currently used to examine healthy and pathological eyes, including autorefractors, wavefront sensors, slit lamps, scanning laser ophthalmoscopes, and optical coherence tomography (OCT) devices. The state-of-the-art clinical method for an objective measurement of retinal function is electroretinography (ERG) [3]. ERG measures electrical potential changes of photoreceptor cells, inner retinal cells, and ganglion cells in response to specific stimuli. Multifocal ERG even provides spatial resolution, but the weak ERG signal does not allow testing of single cells.

However, there is currently no objective method to measure the function of individual photoreceptor cells, neurons, or their interaction. Such measurements require sensitivity to the biochemical processes of vision and its neuronal processing, combined with cellular spatial resolution—a combination that is hardly achieved in human imaging today. Non-optical imaging by computed tomography (CT), ultrasound (US), or magnetic resonance imaging (MRI) does not provide the necessary spatial resolution for photoreceptor cells and neurons. Optical imaging may have the necessary temporal (millisecond) and spatial (2 μm at full pupil size) resolution but lacks contrast for the relevant biochemical processes. Although ion- and voltage-sensitive fluorescent dyes [4] are available to study cellular function in cells, organ cultures, or animals, these dyes are not applicable since they are not approved for human use. And even if they were, the axial resolution of fluorescence imaging is not sufficient to separate the different layers of the human retina. Only the visual cycle itself, i.e., the change of opsins during optical stimulation, has intrinsic spectroscopic contrast and can be studied with high spatial resolution by densitometry [5] and two-photon excited autofluorescence [6]. Both methods give molecular information on the vision cycle and the regeneration of the retinal photopigment.

Changes of the photopigments are measurable since their interaction with light is intrinsic to their function. But the subsequent steps of the photo-transduction cycle are biological processes that involve only changes of the concentration of optically non-active biomolecules and ions, as well as changes of the electrical potentials. In vivo, these changes are not directly measurable by optical means. However, secondary effects like changes of the refractive index, scattering, birefringence, or morphology [7, 8] have been suggested for a non-invasive measurement. Optical changes of the tissue that are correlated with retinal activity are called intrinsic optical signals (IOS). Despite their unspecific origin and small signal level, they allow non-invasive optical measurement of photoreceptor and neuronal activity [8, 9]. Considerable efforts have been invested since the end of the 1970s to detect IOS [7, 8, 10]. Changes of scattering, reflection, or birefringence, as well as geometrical changes of photoreceptor and neuronal cells, were detected ex vivo and in animals.

However, transferring these results to imaging of the retina in humans faced considerable problems. Ocular aberrations and the restricted numerical aperture confine the imaging resolution. The sensitivity to changes of the optical tissue properties is limited by photon shot noise. Increasing irradiation levels on the retina improves the signal-to-noise ratio but is limited by the permissible irradiance of the fundus. Ocular motion causes blur and other artifacts in the images. Still, IOS were measured in vivo using fundus photography, scanning laser ophthalmoscopy (SLO) [11], and optical coherence tomography (OCT) [12]. The observed signals were dominated by noise, since the vision process is associated with only small changes in the scattering of membranes and cell organelles. Hence the observed intrinsic signals were of low quality, corrupted by motion artifacts [13], difficult to interpret, and their physical origin was unclear. First hints that phase-sensitive imaging might be the key to reliably detecting IOS were found by Jonnal et al. [14]. Using a high-speed flood-illumination retina camera equipped with adaptive optics (AO), fluctuations of the reflections of cones were observed under stimulation. These fluctuations were explained by interference of the reflections from both ends of the outer segment, which is very sensitive to changes of the length or the refractive index of the outer segments. A quantitative analysis of these interferences is possible by OCT.

#### **8.2 Holographic Optical Coherence Tomography**

Fourier domain OCT (FD-OCT) is the state-of-the-art technique for imaging the human retina. It obtains detailed cross-sectional and three-dimensional images that visualize anatomical structures and their pathological changes (Fig. 8.1) and is now indispensable for diagnosing a multitude of clinically important diseases [15–17]. OCT is based on interference [18] and detects not only the intensity of scattered light with an axial resolution of a few micrometers, but also optical path length changes, which are calculated from the phase of the interference pattern that generates the OCT signal (Fig. 8.2a). The phase carries information on morphological alterations in the nanometer range or on percentage changes of the index of refraction. In addition, the phase information can also be used for numerical correction of aberrations [19], allowing imaging of the retina with single-cell resolution. Combining aberration correction with interferometric measurements allows OCT to quantitatively image minute retinal changes.

**Fig. 8.1** Cross-sectional OCT image of human retina with the corresponding cellular structures

Unfortunately, in retinal imaging the phase is usually dominated by ocular movements [20, 21]. These include drift, tremor, and microsaccades [22, 23]. They are an integral part of the vision process and cannot be avoided completely. Microsaccades are sporadic, very fast movements, which reach amplitudes of up to 300 μm and a speed of 10 mm/s. They occur every few seconds but can be suppressed for some time. Between the microsaccades the retina drifts continuously at a speed of about 100 μm/s. An additional tremor causes 10–100 Hz oscillations with amplitudes in the order of the lateral resolution. Finally, axial motion of about 6 μm is caused by the pulsation of the choroidal vessels [24]. The influence of these ocular motions on the phase in the OCT images depends on the speed of OCT imaging and the sampling pattern of the voxels.

In fact, much effort was spent to obtain phase-stable data, i.e., data with usable phase information, in order to show some of the advantages of phase-stable imaging. Today's clinical FD-OCT systems measure 100,000 A-scans/s and raster scan the tissue in both lateral directions (Fig. 8.2b). Full phase stability is established in each A-scan and over parts of a B-scan, which is acquired in less than 10 ms. These OCT systems still do not reach the phase stability required to detect nanometer-scale morphological changes with the necessary precision.

Imaging without lateral phase noise is possible by full-field (FF) OCT. Instead of using a scanning beam to image an area of the retina, it combines a collimated illumination of the sample with detection by an area camera (Fig. 8.2c). Using the time domain principle, phase stable imaging of en face planes within the retina was demonstrated [25]. By translating the reference mirror, volumes of 1.5 mm × 1.5 mm × 1.4 mm were measured in 1.3 s [26]. However, within the 5 ms needed to move from one axial plane to the next, the phase relation between the en face images is lost.

Only significantly faster imaging can provide full three-dimensional phase stability. Utilizing a wavelength-swept light source for spectral detection of the interference, full-field swept-source (FF-SS) OCT [27] measures a complete retina volume (Fig. 8.2d) in less than 10 ms [28, 29]. In these volumes, retinal motion affects mostly the spectral phase of all voxels, which reduces resolution and sensitivity similarly to an unbalanced group velocity dispersion. This effect can be corrected by numerical post-processing [29, 30]. Hence, FF-SS-OCT gives access to the full three-dimensional phase information during in vivo retinal imaging [29].

**Fig. 8.2** Optical coherence tomography (OCT) for volumetric imaging of the retina. (**a**) Interference of polychromatic, broadband light is used to detect the distance of a scattering structure with high accuracy. The spectral intensity *I*(*λ*) of the interference depends on the difference of the pathlengths *zR* and *zS* in the reference and sample arm, respectively, and on the corresponding intensities *IR* and *IS*. (**b**) Scanning Fourier domain (FD) OCT imaging is currently used in clinical diagnosis. The axial information (amplitude and phase) is measured in parallel and the beam is raster scanned across the tissue to image the volume. (**c**) Full-field time domain (FF-TD) OCT measures en face images in a few hundred microseconds. By scanning the en face plane, the full retina is acquired within seconds. The relative phase in each plane is not influenced by ocular motion, but the phase between the different planes is. (**d**) Full-field swept source (FF-SS) OCT images the full volume within 10 ms. After correction of global axial motion, high resolution images of scattering amplitude and phase are reconstructed, which are free of motion artifacts. Volumetric imaging with minimal influence of eye motion on phase stability is achieved by parallel acquisition of lateral and axial information

Using the phase information, it becomes possible to refocus acquired images post factum [19, 31]. Even higher-order aberrations can be corrected if the exact aberrations are known or if a method is conceived that yields them [31]. A system such as FF-SS-OCT, which can obtain the complete phase information, is in a sense a truly holographic OCT system: one that captures the whole information about the backscattered light field. We successfully developed such a system for retinal imaging to measure IOS.

#### **8.2.1 Optical Setup**

The FF-SS-OCT setup used for imaging IOS of the retina (Fig. 8.3a) is described in detail elsewhere [31, 32]. In short, wavelength-swept light was generated by a tunable laser (Broadsweeper BS-840-1, Superlum, Ireland) with a central wavelength of 840 nm and a bandwidth of 50 nm. Its light was split by a fiber coupler into reference and sample arms. The sample light was collimated onto the area of interest on the retina, which was then imaged onto a high-speed camera (FASTCAM SA-Z, Photron, Japan). Superimposition with the collimated reference beam generates a hologram-like interference pattern, which was acquired at 60,000 frames per second. To obtain OCT volumes, 512 of these images were acquired during each wavelength sweep. Since the camera needs to store these images in internal memory, we were restricted to 70 volumes before the camera memory was exhausted.

For measurements of IOS, the retina was stimulated with a white light pattern. To this end, light of an LED, which was spatially modulated either by a simple mask or by a modified projector, was coupled into the sample arm such that a pattern was projected sharply onto the retina. A low-pass filter in front of the camera ensured that no light from the stimulation reached the camera. Synchronization of camera, swept laser, and projector was governed by an Arduino Uno microcontroller board. For the measurements, healthy volunteers were positioned with a custom-fit face mask in front of the setup to ensure repeatable measurement positions. The thermoplastic masks we used were originally designed for radiation therapy of the head [33] and proved to be a good alternative to a bite bar. After choosing an area of interest, fixation targets were selected and fixed in appropriate places. We then obtained OCT volumes at different areas with different stimulation intensities, stimulation frequencies, and time intervals between the stimulations. Written informed consent was obtained from all subjects. Compliance with the maximum permissible exposure (MPE) of the retina and all relevant safety rules was confirmed by a local safety officer. The study was approved by the ethics board of the University of Lübeck (ethics approval Ethik-Kommission Lübeck 16-080).

**Fig. 8.3** (**a**) Full-field swept-source optical coherence tomography (FF-SS-OCT) setup for measuring intrinsic optical signals (IOS) of human retina. (**b**) Reconstructed volume of the retina

#### **8.2.2 Data Evaluation**

FF-SS-OCT lacks a confocal gate. Therefore, it is highly vulnerable to multiple scattering and to internal reflections within the setup. While multiple scattering is not severe in the neuronal retina, internal reflections show up as static horizontal lines and are hard to prevent. However, in contrast to any internal reflections, the retina itself is in constant motion. Averaging all 70 volumes in a dataset will therefore result in a phase washout of the retinal structures, leaving only the static structures that are due to internal reflections. The first step of the reconstruction is therefore simply the subtraction of this averaged volume from each volume in the dataset.
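The average-subtraction step can be sketched with synthetic complex volumes. Dimensions and signal values below are made up for illustration; this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vol, shape = 70, (8, 8, 16)   # 70 volumes of a small toy grid (illustrative)

# Moving retina: a random phase per volume washes out under complex averaging.
retina = np.exp(2j * np.pi * rng.random((n_vol,) + shape))
# Static internal reflection: an identical complex signal in every volume.
reflection = (0.7 + 0.2j) * np.ones((n_vol,) + shape)

data = retina + reflection
# Subtract the complex mean volume: the static reflection cancels exactly,
# while the phase-washed retinal signal survives almost unchanged.
cleaned = data - data.mean(axis=0, keepdims=True)

print(np.abs(cleaned).mean())   # ~1: the unit-amplitude dynamic signal remains
```

Because the reflection is identical in all volumes, it is contained entirely in the mean and is removed exactly; the retinal term only loses its small complex average.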

The second step in the data evaluation is a standard Fourier domain OCT reconstruction. This is achieved by Fourier transforming all 512 images in each volume along the image number axis, i.e., the spectral axis, in order to retrieve the depth information. The Fourier transform of all 512 different wavelengths yields an A-scan for each pixel of the camera.
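The spectral Fourier transform step can be illustrated with a toy dataset; the camera dimensions and the reflector depth below are made up, and this is not the authors' reconstruction code:

```python
import numpy as np

# Simulated FF-SS-OCT raw data: 512 camera frames over the sweep (one
# spectral sample per frame) for a tiny 4x4-pixel camera.
n_k, nx, ny = 512, 4, 4
k = np.arange(n_k)

# A single reflector at depth bin z0 produces a sinusoidal spectral fringe.
z0 = 37
fringes = np.cos(2 * np.pi * z0 * k / n_k)[None, None, :] * np.ones((nx, ny, 1))

# FD-OCT reconstruction: a Fourier transform along the spectral (frame) axis
# turns every camera pixel's fringe into an A-scan.
ascans = np.fft.fft(fringes, axis=-1)
depth_profile = np.abs(ascans[0, 0, : n_k // 2])
print(int(np.argmax(depth_profile)))   # 37: the reflector's depth bin
```

Each pixel's fringe frequency encodes the reflector depth, so one FFT per pixel yields the whole volume's depth information in parallel.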

Since axial motion during the sweep changes the optical path length for each wavelength, it manifests in axial blurring that is essentially identical to group velocity dispersion (GVD) mismatch between reference and sample arm. We corrected this by multiplying the spectral data with an appropriate phase factor that compensates for the induced phase changes. Different methods can be used to obtain the correct phase factors [29–31]. In our scenario the phase factors were determined iteratively by optimizing axial resolution using an image-sharpness metric.

In OCT imaging, ocular aberrations also manifest as phase factors. In contrast to the aforementioned phase factors, aberration-related phase factors are applied to the two-dimensional lateral Fourier transform of each en face plane, instead of the axial Fourier transform of each A-scan. Thus, the aberrations were also corrected after the acquisition of the volumes by optimization of image sharpness [31]. To this end, wavefront phase errors were represented by a linear combination of Zernike polynomials, and this combination was varied in an optimization using a simplex-downhill algorithm and a gradient descent algorithm until the chosen sharpness metric, here the Shannon entropy, was minimized.
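The idea of a lateral-Fourier-domain phase correction judged by Shannon entropy can be sketched for the simplest case of pure defocus. The aberration strength and the point-scatterer scene are hypothetical; the full method optimizes a Zernike combination with simplex-downhill and gradient descent rather than applying the known inverse:

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy of the normalized intensity image; sharper images
    concentrate energy in fewer pixels and therefore have lower entropy."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Defocus applied as a quadratic phase in the lateral Fourier domain
# (the defocus Zernike mode up to constants); strength 80 rad is illustrative.
n = 64
fx, fy = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
defocus = np.exp(1j * 80.0 * (fx ** 2 + fy ** 2))

sharp = np.zeros((n, n), dtype=complex)
sharp[n // 2, n // 2] = 1.0   # a single point scatterer
blurred = np.fft.ifft2(np.fft.fft2(sharp) * defocus)
restored = np.fft.ifft2(np.fft.fft2(blurred) * np.conj(defocus))

# The entropy metric ranks the corrected image as sharper (lower entropy).
print(shannon_entropy(blurred) > shannon_entropy(restored))
```

In the real optimization, the conjugate phase is not known a priori; the entropy is evaluated for trial Zernike coefficients until it is minimized.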

As a final pre-processing step, prior to the actual phase evaluation, the layers and pixels that actually carry the information about the outer segment length need to be extracted. This can be achieved by various methods for co-registration of the different volumes and segmentation of the inner segment/outer segment junction (IS/OS). The outer segment tips are then assumed to be at a constant distance from the segmented IS/OS. Once these layers have been segmented, we can finally move to the phase evaluation.

The phases in any single layer do not carry information about the expansion of the photoreceptors. For this reason, phase changes can only be evaluated by comparing phases between two distinct layers and between two different time points. Therefore, each pixel of each reconstructed volume was first referenced to the corresponding co-registered pixel in one specific volume that was acquired prior to the beginning of the optical stimulus, by subtracting the respective prestimulus phase from each phase value. This removes any random spatial dependence of the phase but leaves its temporal evolution untouched. Afterwards, the phases in the two layers of interest, i.e., the outer segment tips and the inner segment/outer segment junction, were extracted by complex averaging of the OCT signals of the segmented layers and their axially adjacent points. Finally, changes in the optical path length of the photoreceptor outer segments were obtained from the phase difference of the two layers [32].
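In code, the referencing and the two-layer phase comparison might look like this (a sketch under simplifying assumptions: layers flattened to fixed depth indices, a single assumed center wavelength, and a double-pass phase-to-length conversion):

```python
import numpy as np

LAMBDA0 = 840e-9  # assumed center wavelength of the source, in meters

def os_length_change(vol, prestim, isos, tips, hw=1):
    """Optical path length change of the outer segments (sketch).

    vol, prestim: complex OCT volumes (nz, ny, nx); prestim is a volume
    recorded before the stimulus. isos, tips: depth indices of the IS/OS
    junction and the outer segment tips (assumed constant here, i.e. the
    volume is already flattened to the segmented IS/OS). hw: number of
    axially adjacent points included in the complex average.
    """
    # Reference every voxel to its prestimulus phase; the magnitude of
    # prestim cancels in the angle, leaving only the phase difference.
    ref = vol * np.conj(prestim)

    def layer_phase(idx):
        # Complex averaging over the layer and its axially adjacent points.
        return np.angle(ref[idx - hw:idx + hw + 1].sum(axis=0))

    # Wrapped phase difference between the two layers ...
    dphi = np.angle(np.exp(1j * (layer_phase(tips) - layer_phase(isos))))
    # ... converted to length; lambda/(4*pi) assumes double-pass reflection.
    return dphi * LAMBDA0 / (4 * np.pi)
```

A phase difference of 0.5 rad between the tips and the IS/OS then maps to an optical path length change of roughly 33 nm at the assumed wavelength.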

The overall data evaluation after acquisition required multiple hours on standard PC hardware. A particular challenge is the robustness of segmentation and co-registration, especially across different retinas and different areas within the same retina. Both co-registration and segmentation are further complicated by the low signal-to-noise ratio of the FF-SS-OCT data and its increased imaging artifacts.

#### **8.3 IOS of the Human Photoreceptor Cells**

During optical stimulation, time series of up to 70 volumes were recorded. After reconstruction, single photoreceptors were visualized (Fig. 8.4a–c), fixed at their position in the field of view over several seconds. No changes in light scattering were observed after optical stimulation (Fig. 8.4a–c). However, calculating the phase difference between the inner segment/outer segment junction (IS/OS) and the photoreceptor tips (Fig. 8.4d) showed a clear increase of the optical path length across the outer segments. Within the optical resolution of the eye, this elongation is confined to the stimulated area (Fig. 8.4e); the phase-difference evaluation thus reproduces an image of the stimulation pattern on the retina. Aberration correction makes it possible to assign the IOS to single photoreceptor cells, or rather to identify photoreceptor cells that did not contribute to the IOS even though they were stimulated (arrows). In a repeated measurement a few minutes later, the specific cones that did not contribute to the IOS again lacked a response (Fig. 8.4f). However, it is currently not clear whether these photoreceptors truly do not react or whether the missing IOS is an imaging artifact.

**Fig. 8.4** Retinal imaging and response to an optical stimulus. (**a**–**c**) En face planes showing the backscattered intensity from the outer segment at the beginning of the stimulus, after 180 ms, and after 384 ms. Cone photoreceptors are clearly visible, but no change in the observed backscattering is detected. (**d**) Cross-sectional view (B-scan) from the center of the recorded volume. Changes of the optical path length within the outer segment were evaluated in this study. (**e**, **f**) Spatially resolved changes of the optical path length ∆*l*, 247 ms after switching off the stimulus pattern. A magnified view is shown in the lower left inset. The measurement was repeated twice with a time interval of about 10 min. The response is reproducible in the stimulated region. Noise was reduced by a lateral Gaussian filter. Scale bars are 200 μm. (**d**–**f**) reprinted with permission from Ref. [32]

For the evaluation of phase differences across the outer segments, resolving single photoreceptors is not necessary. Even without large pupil diameters and/or aberration correction, clear phase signals were measured (Fig. 8.5a–d, g–j). For a short optical stimulus, the time course of the averaged phase differences was analyzed (Fig. 8.5k). First the optical path length in the outer segments shrinks by a few nanometers; then an elongation is observed, which continues long after the stimulation of the retina has stopped. The time course of this elongation is highly reproducible within one person. However, the expansion rate differs from subject to subject, although the general time course remains the same. The axial resolution of our FF-SS-OCT system is barely sufficient to discriminate the outer segment tips of rods and cones by their length. Still, different characteristic time courses with characteristic expansion rates were observed when the phase signals from different depths were evaluated (Fig. 8.6a). The density of rods and cones depends on the retinal location: the macula contains mainly cones, whereas the periphery is dominated by rods. The outer segment elongation in the periphery was faster than that in the macula (Fig. 8.6b). This corroborates a different time course of the outer segment elongation for rods and cones.

Although we used different stimulus intensities, the initial expansion rate seems to be independent of both the duration and the intensity of the stimulus. Only the total duration of the expansion and, consequently, the expansion amplitude changed. By evaluating the change of the optical path length in the outer segments, we evidently did not observe effects such as bleaching, conformational changes of rhodopsin, or neuronal activity, which are directly linked to the strength of the stimulus.

#### **8.3.1 Molecular Origin**

Using non-invasive optical measurements, retinal function has been investigated by different groups for quite some time [8, 9]. In humans, the first IOS were measured in photoreceptors using high-speed (192 Hz) retinal imaging with flood illumination and adaptive optics, which resolved single rods [14]. Clear changes in the intensity of backscattered light were observed for each cone upon optical stimulation. Later, IOS were also observed with an adaptive optics SLO [11]. However, the brightness of the reflected light increased or decreased seemingly at random from cone to cone, which was explained by interference between light reflected from the IS/OS junction and the end tips of the cones. Both are highly scattering structures that are clearly visible in OCT images (see Fig. 8.1). Depending on the length of the outer segments, an increase in the optical path length will increase or decrease the detected intensity, depending on whether the reflected light interferes constructively or destructively. This ambiguity of the interference signal makes evaluation of the data difficult [34], and averaging over several cones cancels the signal. Hence, resolving single photoreceptors by adaptive optics was essential. Because the interference occurs internally within the outer segment, the observed signal is insensitive to axial motion, but it fails to give quantitative information about the morphological changes.
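This ambiguity can be illustrated with a two-reflector interference model (a sketch; equal reflection amplitudes and the wavelength and refractive-index values are assumptions for illustration):

```python
import numpy as np

LAM = 840e-9   # assumed imaging wavelength
N_OS = 1.41    # refractive index of the outer segment [35, 36]

def cone_intensity(os_length, elongation=0.0):
    """Detected intensity of two interfering reflections, one from the IS/OS
    junction and one from the outer segment tips (equal amplitudes assumed).
    The double-pass interference phase is 4*pi*OPL/lambda."""
    phase = 4 * np.pi * (N_OS * os_length + elongation) / LAM
    return np.abs(1 + np.exp(1j * phase)) ** 2

# The same 50 nm optical elongation brightens one cone and darkens another,
# depending only on the cone's resting outer segment length:
for L in (12.00e-6, 12.10e-6):
    delta = cone_intensity(L, 50e-9) - cone_intensity(L)
    print(f"outer segment {L * 1e6:.2f} um: intensity change {delta:+.2f}")
```

Because the sign of the intensity change flips with sub-wavelength differences in outer segment length, averaging the intensity signal over many unresolved cones cancels it, which is exactly why single-cell resolution was needed in the intensity-based studies.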

In contrast, OCT imaging gives direct access to phase changes in the scattered light. The difference of the phases in the OCT signal at the IS/OS junction and at the photoreceptor tips unequivocally determines path length changes for every location on the retina. It is not necessary to resolve the photoreceptors; averaging over larger areas with the same response is possible and increases the signal-to-noise ratio. Using FF-SS-OCT, we were able to measure these nanometer-scale optical path length changes with millisecond resolution even in humans, despite the strong ocular motion. After a slight reduction of the path length during the stimulation, an increase of up to half a micrometer was observed. With a typical length and refractive index of the outer segments of 12 μm and 1.41, respectively [35, 36], this corresponds to a 3% elongation. In mice the path length even increased by up to 2 μm, or 10%, under strong light stimulation [37]. Recent measurement techniques using adaptive optics OCT with MHz A-scan rates were also able to reproduce the elongation of the outer segments in humans [38].

**Fig. 8.5** (caption fragment) between the IS/OS junction and the photoreceptor tips in response to a 50 ms stimulus; after a small decrease of the optical path length after 15 ms (**b**), a steady increase is observed (**c**, **d**). (**e**, **f**) The time course of the outer segment elongation was calculated by averaging over the stimulated area. The response of seven individual measurements (**e**, gray lines) shows the reproducibility; the black line shows the average. Dashed […] by OD 0.3 and OD 1.3 filters does not change the rate of elongation, only the maximum elongation. (**l**) Response for 50, 500, and 3000 ms stimuli. The start of the stimulus (**e**, **f**, **k**, **l**) is marked by the vertical dashed black lines; green areas (**e**, **f**, **k**) indicate the duration of the stimuli. Scale bars are 200 μm. Reprinted with permission from Ref. [32]

**Fig. 8.6** Response of rods and cones to optical stimulation. (**a**) Evaluation of the phase difference between the IS/OS and different layers at the tips of the photoreceptors and the retinal pigment epithelium (RPE) gives different time courses. (**b**) The kinetics of the outer segment elongation depend on the retinal location. In the periphery, which is dominated by rods, the optical path length increases faster than in the macula, which contains only cones

The observed changes of the optical path length are evidently caused by the biochemical processes within the outer segments. Here, the retina performs an extremely complex task, converting photons into neuronal activity, which is eventually processed in the visual cortex. The vision process starts with the absorption of a photon by the molecule rhodopsin, which is composed of the protein opsin bound to the retinal molecule in its 11-cis conformation. Absorption of light changes retinal to the all-trans conformation [39, 40]. Next, several biochemical amplification steps follow. Activated rhodopsin activates the G protein transducin by catalyzing an exchange of bound guanosine diphosphate (GDP) for guanosine triphosphate (GTP). This leads to the dissociation of a subunit of transducin, which then activates phosphodiesterase (PDE). Cyclic guanosine monophosphate (cGMP), which keeps Na+ channels in the plasma membrane open, is subsequently hydrolyzed by the activated PDE to 5′-guanosine monophosphate (GMP); the channels close, and the Na+ concentration in the photoreceptor cell drops, together with the voltage across the plasma membrane, due to the continuous active discharge of Na+ from the photoreceptor cells. As a result, the photoreceptors hyperpolarize and reduce their release of neurotransmitter, which causes either depolarization or hyperpolarization in the bipolar cells, depending on their subtype. Here, and in a second inner neuronal layer of retinal ganglion cells, the visual information is processed before it is transmitted via non-myelinated axons to the brain. These axons form the nerve fiber layer and the optic nerve, which leaves the eye ball at the optic disk.

Although the observed elongation of the optical path length is clearly connected with the activity of the photoreceptor cells, it is unclear from which particular molecular process it originates. In the phototransduction cascade there are many opportunities for an alteration of the optical path length. Changes in the absorption spectra of the photosensitive molecules or changes in cellular concentrations could alter the optical density and the refractive index. Generally, the refractive index of solutions scales linearly with the concentration of the solutes [36, 41]. However, under physiological conditions reasonable concentration changes would merely lead to a change of a few nanometers. For example, the refractive index of phosphate-buffered saline changes by only 0.01 per mole of concentration change [42]. A concentration change of 100 mmol would change the optical path length by less than 10 nm, which is much smaller than the observed elongation of several hundred nanometers. Therefore, it seems reasonable that the increase of the optical path length is caused by a physical elongation of the outer segment, i.e., we observe a conformational change of the outer segment by analyzing the phase of the OCT signal. Morphological changes of cells can be caused by different mechanisms: cells change their shape actively with the help of structural proteins like actin [43], or by changing the intracellular volume or the surface tension of the membranes [44]. The most likely explanation for the observed conformational change of the outer segment appears to be a compensation for an osmotically driven volume change [37]. However, it is not completely clear which kind of concentration changes could lead to volume changes of several percent. The relocation of the G protein transducin was put forward as a driving mechanism for a water influx that compensates the increase in osmotic pressure [37]. However, active actin-based processes leading to an elongation were also observed in teleost rods [45, 46].
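These order-of-magnitude estimates can be retraced numerically (the outer segment length of 12 μm and index 1.41 are the typical values quoted above; a single-pass optical path is assumed, and the concentration estimate lands at roughly 10 nm, the same order of magnitude as in the text):

```python
DN_PER_MOL = 0.01      # refractive index change of PBS per mol/l [42]
L_OS = 12e-6           # typical outer segment length in meters [35, 36]

# Index change for a 100 mmol/l concentration change ...
dn = DN_PER_MOL * 0.1                 # -> 1e-3
# ... and the resulting optical path length change over the outer segment:
dopl_nm = dn * L_OS * 1e9             # on the order of 10 nm

# Observed elongation: up to 0.5 um over an optical length of n * L:
observed_fraction = 0.5e-6 / (1.41 * L_OS)   # roughly 3 %

print(f"concentration effect: ~{dopl_nm:.0f} nm")
print(f"observed elongation:  {observed_fraction:.1%}")
```

The concentration-driven effect falls short of the observed several-hundred-nanometer elongation by more than an order of magnitude, which is the quantitative core of the argument for a physical elongation.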

#### **8.3.2 Technical Limitations of FF-SS-OCT**

To achieve holographic OCT, three ingredients were key: first, data had to be acquired as fast as possible in order to reduce eye motion during acquisition; second, scanning artifacts had to be eliminated by using FF-SS-OCT [28]; and third, numerical motion correction, segmentation, and registration had to be used to extract the phase information. This yielded holographic OCT imaging with more than 100 Hz volume rate and complete phase information, which no other imaging technology has provided so far. However, to allow imaging of the human retina with this technique, cameras with frame rates above 20 kHz are required; 60 kHz is preferable [29]. At the latter acquisition rate, we image the retina with up to a 40 MHz A-scan rate. An additional advantage of our full-field imaging technique is that the slowest axis, and thereby the one most vulnerable to phase errors, is the axial direction. Along this axis, phase errors due to global motion are identical in all A-scans and can be removed numerically [29, 30]. The frame rates of 20–60 kHz that are needed for reasonable fields of view are currently only achieved by complex, bulky, and expensive high-speed CMOS cameras. For this reason, the otherwise simple and low-cost setup currently becomes highly expensive.
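The quoted rates follow from simple bookkeeping. Only the 512 spectral frames per volume and the 60 kHz frame rate come from the text; the 640 × 512 pixel frame size below is an assumption chosen for illustration:

```python
FRAME_RATE = 60_000        # camera frame rate in Hz (the preferable value)
FRAMES_PER_VOLUME = 512    # spectral frames per sweep
NX, NY = 640, 512          # assumed camera frame size in pixels

volume_rate = FRAME_RATE / FRAMES_PER_VOLUME   # ~117 volumes per second
# Every camera pixel yields one A-scan per volume:
ascan_rate = NX * NY * volume_rate             # ~38 MHz, i.e. "up to 40 MHz"

print(f"volume rate: {volume_rate:.0f} Hz, A-scan rate: {ascan_rate / 1e6:.1f} MHz")
```

With these assumed numbers the volume rate exceeds 100 Hz and the equivalent A-scan rate approaches 40 MHz, consistent with the figures given above.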

Besides the many advantages provided by FF-SS-OCT, there are also some disadvantages and restrictions. Image quality in each single volume is comparably poor. This is caused by the fast imaging speed, which leads to a short integration time and therefore a low number of detected photons per voxel. The low image quality, combined with the huge amount of data generated by the fast imaging, represents a serious challenge for the post-processing. For a phase-sensitive evaluation of the measurements it is important to reference the phases to exactly the same location, preferably with sub-pixel precision. For the detection of IOS in the retina, one major challenge in post-processing is therefore the co-registration and segmentation of the region of interest. The huge data size and the high demands on precision lead to long computation times, accumulating to several hours.

The parallel imaging of the whole field of view precludes the suppression of multiply scattered light that confocal imaging provides. As a consequence, the image quality suffers from crosstalk and is sensitive to reflections from the imaging optics, including the cornea of the eye. Fortunately, the neuronal retina is not strongly scattering, and ballistic photons by far dominate the imaging. Imaging of the choroid, however, is severely degraded by multiple scattering from the retinal pigment epithelium (RPE).

#### **8.3.3 Outlook**

Phase-sensitive measurements with FF-SS-OCT deliver unique information that strongly suggests clinical applications. So far, the diagnosis of retinal diseases is mainly based on ERG or on morphological changes seen in OCT imaging. With a functional analysis of the retina, earlier diagnosis or better therapy monitoring could become possible; therapies could be adjusted, and unnecessary medication or treatments could be avoided. Furthermore, the retina is developmentally and anatomically the only sensory system that is part of the central nervous system (CNS) [47]. Therefore, many neurodegenerative diseases of the CNS, such as Parkinson's disease [48, 49], Alzheimer's disease [50–54], and multiple sclerosis [55–57], are associated with morphological changes in the retina. It is reasonable to assume that functional changes precede these morphological changes as well. Thus, the detection of IOS could expand from a clinical application in the retina to a clinical application for the whole CNS.

In the long run, phase-sensitive detection of IOS can contribute to basic research on neuronal behavior, wiring, and signal processing in the retina, and thereby lead to a better understanding of vision and to general conclusions about the behavior of the CNS. However, for well-founded research with clinical value it is necessary to unambiguously resolve the molecular origin of the IOS. The next steps should therefore be further ex vivo and animal experiments, in which different pathways of the molecular processes can be switched off and thereby excluded as possible molecular origins. For an investigation of neuronal wiring it is necessary to visualize such changes in optical path length in other cell layers as well, for instance in the inner nuclear layer, including the bipolar and amacrine cells, or in the ganglion cell layer. The signals from these layers are expected to be weaker than the signal from the photoreceptor outer segments. Additionally, motion artifacts from the vessels, which corrupt the IOS, become stronger in these layers. Provided that cellular function causes changes in the optical path length in these layers as well, such changes should, with sophisticated post-processing, also be visible at these depths.

#### **References**

1. Bruce KS, Harmening WM, Langston BR, Tuten WS, Roorda A, Sincich LC. Normal perceptual sensitivity arising from weakly reflective cone photoreceptors. Invest Ophthalmol Vis Sci. 2015;56(8):4431–8.


patients with head and neck cancer. J Appl Clin Med Phys. 2013;14(5):243–54.


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Two-Photon Scanning Laser Ophthalmoscope**

Tschackad Kamali, Spring RM. Farrell, William H. Baldridge, Jörg Fischer, and Balwantray C. Chauhan

#### **9.1 Introduction**

Two-photon excitation (TPE) fluorescence imaging is a powerful emerging tool in biomedical applications, providing high penetration depth and inherent three-dimensional (3-D) sectioning at the subcellular level [1–3]. The retina is the only tissue in which single neurons can be imaged optically and noninvasively, owing to the high transparency of the preretinal tissues [4, 5]. TPE with infrared (IR) light is particularly well suited for *in vivo* retinal imaging. Reporter molecules in cell bodies can be excited with IR light, allowing differential activation of rod and cone photoreceptors by wavelengths in the visually sensitive range to evoke responses in the retina [6–9]. An additional advantage of TPE is reduced phototoxicity [8]. TPE fluorescence imaging enables the study of functional physiological processes which, in combination with *in vivo* ophthalmoscopy, represents a powerful imaging technique well suited for noninvasive *in vivo* retinal imaging. Applications include longitudinal tracking of disease progression, for example in optic neuropathies, in which retinal ganglion cells (RGCs), the output neurons from the eye to the brain, are lost.

*Portions of the text in this chapter are from 'Simultaneous in vivo confocal reflectance and two-photon retinal ganglion cell imaging based on a hollow core fiber platform. J Biomed Opt. 2018 Mar;23(9):1–4. doi: 10.1117/1.JBO.23.9.091405.' Reproduced under a Creative Commons Attribution 3.0 License (https://creativecommons.org/licenses/by/3.0/legalcode).*

T. Kamali (\*) · J. Fischer

Heidelberg Engineering GmbH, Heidelberg, Germany

S. RM. Farrell Retina and Optic Nerve Research Laboratory, Dalhousie University, Halifax, NS, Canada

Department of Physiology and Biophysics, Dalhousie University, Halifax, NS, Canada

Department of Medical Neurosciences, Dalhousie University, Halifax, NS, Canada

W. H. Baldridge Retina and Optic Nerve Research Laboratory, Dalhousie University, Halifax, NS, Canada

Department of Medical Neurosciences, Dalhousie University, Halifax, NS, Canada

Department of Ophthalmology and Visual Sciences, Dalhousie University, Halifax, NS, Canada

B. C. Chauhan Retina and Optic Nerve Research Laboratory, Dalhousie University, Halifax, NS, Canada

Department of Physiology and Biophysics, Dalhousie University, Halifax, NS, Canada

Department of Ophthalmology and Visual Sciences, Dalhousie University, Halifax, NS, Canada


© The Author(s) 2019. J. F. Bille (ed.), *High Resolution Imaging in Microscopy and Ophthalmology*, https://doi.org/10.1007/978-3-030-16638-0_9

#### **9.1.1 Retinal Signaling**

The retina is a photosensitive neuronal tissue that transmits visual information to the brain. Light, captured and focused by the eye, is converted from individual photons to an electrical signal that is propagated to the brain by action potentials. In the photoreceptors, isomerization of retinal is catalyzed by photons of incident light, which initiates a series of intracellular signaling events that converts the light stimulus into a chemical signal that is propagated through interneurons (horizontal cells, bipolar cells and amacrine cells) to the retinal ganglion cells (RGCs, Fig. 9.1) [10]. RGCs receive input from multiple bipolar and amacrine cells and encode these signals prior to transmitting this information to the brain via the optic nerve. Competing excitatory and inhibitory stimuli detected within an RGC's dendritic field are integrated to determine a net increase, decrease or null change in action potential firing. Much of this RGC processing generates visual acuity and contrast sensitivity [11]. There are many subtypes of RGCs with varying functional properties in the retina, which have been characterized at the single cell level *ex vivo* [12]; however, *in vivo* functional analysis of individual RGCs is lacking.

#### **9.1.2 Imaging Retinal Neurons**

Numerous studies have introduced fluorescent tracer molecules into RGCs to enable the visualization of retinal cells in experimental models *ex vivo* [13]. These studies have provided an immense body of knowledge about the mammalian retina, including RGC densities, quantification of axonal transport, and cell loss after axonal injury. However, these studies are cross-sectional in nature, requiring removal of the retina, histologic preparation, and microscopic examination. *In vivo* confocal scanning laser ophthalmoscopy (CSLO) established a new longitudinal imaging paradigm that allows individual RGCs labelled with fluorescent reporter molecules (Table 9.1) in the rodent retina to be followed over time [13–17] (Fig. 9.2). These studies (among countless others) have provided invaluable insights into the progression of dendritic retraction and RGC death after injury; however, inferring information about retinal function (or dysfunction) from structural imaging remains a challenge. Expression of genetically encoded calcium indicators (GECIs), such as GCaMP, in RGCs allows the visualization of neuronal activity [18]. The Thy1-GCaMP3 transgenic mouse line selectively expresses GCaMP in the majority of RGCs [19], which allows a subset of the RGC population to be studied over time. Changes in intracellular calcium levels during action potentials are reported by changes in GCaMP fluorescence, which can be imaged *in vivo* using modified CSLO systems [20]. One challenge of *in vivo* functional imaging with CSLO is the overlap in spectral sensitivities of mammalian photoreceptors and the most commonly used fluorescent reporter molecules (Fig. 9.3 and Table 9.1). Murine experimental models can be used, in which excitation of UV-sensitive cones stimulates the visual pathway while differential excitation of the reporter molecule is possible, allowing the visualization of functional responses in RGCs (Fig. 9.4). However, this method examines only a portion of the visual system and, further, humans do not have UV-sensitive cones, making this approach unlikely for clinical translation. TPE addresses both of these limitations by using brief pulses of IR light, which penetrates the preretinal tissues and excites fluorophores without stimulating the photoreceptors (Fig. 9.5 and Table 9.1).

Studies have developed TPE imaging systems to examine functional responses of RGCs *in vivo* [9, 21]. In TPE imaging, pulses of long-wavelength light (typically in the IR range), at approximately double the single-photon excitation wavelength (Table 9.1) [22], are used to create conditions in which two photons simultaneously stimulate the fluorescent reporter molecule, together delivering an energy similar to that of a single short-wavelength photon (detailed theory in Sect. 9.2). Yin et al. [21] used adaptive optics (AO, described in Chap. 16) in conjunction with TPE to image RGC activity, reported by GCaMP5, in non-human primate retina. Similarly, but without the use of AO, Bar-Noam et al. [9] measured *in vivo* changes in GCaMP6 fluorescence in RGCs in response to stimulation of the visual pathway. This study demonstrated *in vivo* calcium transients in RGCs similar to those reported previously in *ex vivo* retinal preparations [12]. Together, these studies provide a foundation to further develop TPE as an imaging platform to study RGC function *in vivo*; however, clinical translation requires further refinement.

**Fig. 9.1** Schematic representation of cell types shown in a retinal cross section. Incident light, focused on the retina through the lens, travels through the inner layers of the retina to reach the photoreceptors (PRs, rods: blue; cones: red), where visual pathway signaling begins. The signal is propagated from photoreceptors to bipolar cells (BPCs, yellow) to retinal ganglion cells (RGCs, brown), which transmit visual information to the brain via axons that make up the nerve fiber layer (NFL). Horizontal cells (HCs, green) and amacrine cells (ACs, grey) make lateral connections, which modulate the signal, and Müller glia cells (dark purple) act as metabolic support cells in the retina. *RPE* retinal pigment epithelium, *PRL* photoreceptor layer, *ONL* outer nuclear layer, *OPL* outer plexiform layer, *INL* inner nuclear layer, *IPL* inner plexiform layer, *GCL* ganglion cell layer, *NFL* nerve fiber layer. *Modified from: Hartpete (https://commons.wikimedia.org/wiki/File:Retina.jpg)* [35]*, "Retina", labels modified by S Farrell 2018, https://creativecommons.org/licenses/by-sa/3.0/legalcode*

**Table 9.1** Exogenous fluorophores commonly used as reporters to image retinal neurons. Data from: [28–30]

**Fig. 9.2** Longitudinal *in vivo* CSLO images of fluorescent reporter molecules expressed in RGCs of the murine retina before and after optic nerve transection (ONT). (**a**) Transgenic Thy1-YFP line H mice express YFP in <0.5% of RGCs. Top: Baseline *in vivo* imaging shows sparse labeling of a few RGCs, including their dendrites and axons extending to the optic nerve head. Bottom: Follow-up imaging of the same subject, in the same retinal location, 7 days post-ONT shows fewer YFP labelled RGCs and retraction of remaining RGC dendrites. (**b**) Transgenic Thy1-GCaMP3 mice express GCaMP3 in approximately 65% of RGCs. Top: Baseline imaging shows many labelled RGCs, including axons. Bottom: 7 days post-ONT, dramatic RGC loss can be observed. All images show a 30° field of view

**Fig. 9.4** *In vivo* CSLO images of calcium responses in RGCs, reported by GCaMP3 in the Thy1-GCaMP3 mouse retina. *In vivo* single-photon CSLO images taken with the Spectralis Multiline. Left: baseline GCaMP3 fluorescence (488 nm excitation). Middle: the UV (365 nm) light-evoked response of the visual pathway increased GCaMP3 fluorescence in some cells (red box), while other cells decreased fluorescence during UV stimulation (blue box). Right: post-light-stimulus GCaMP3 fluorescence. Each image reflects the integration of approximately 25 s of recording

#### **9.1.3 Imaging Other Retinal Cell Types** *In Vivo*

Several endogenously expressed molecules have been found to exhibit fluorescence and lend themselves to act as reporter molecules (Table 9.2), but they require single-photon excitation by UV wavelengths, which are absorbed by the cornea and lens and also have phototoxic effects in the human retina. Using TPE, these endogenous fluorophores can be excited *in vivo* and act as endogenous reporter molecules. Recent studies have imaged retinal structures of the inner and outer retina, with single-cell resolution, using AO-enhanced TPE [23–27]. In addition to gaining structural insights, fluorescence lifetime information can be extracted from these data. Changes in the fluorescence lifetime (described in Chap. 10) of endogenous molecules, such as NAD(P)H and FAD, have been suggested as a functional measure of cellular activity and health in the retina [26]. Thus, together, TPE and fluorescence lifetime measurements may provide functional data from the retina without the use of exogenous reporter molecules, which may allow tracking of disease progression in patients.

**Fig. 9.5** Schematic representation of differential activation of the visual pathway and excitation of GCaMP. Left: Flashes of LED light stimulate photoreceptors (S- and M-cones, blue and green light, respectively) and activate the visual pathway. Right: In between LED flashes, the IR femtosecond pulse laser (930 nm) stimulates GCaMP3 in the RGC layer. Because the 2P effect localizes the laser energy in the RGC layer, photoreceptors are not activated by the IR laser, thus effectively separating visual pathway and reporter excitation. *Modified from:* [35]*, "Retina", labels and colour modified by S Farrell 2018, https://creativecommons.org/licenses/by-sa/3.0/legalcode*

**Table 9.2** Endogenous fluorophores expressed in retinal cells. Data from: [31–34]

a Examples for demonstration purposes; other excitation wavelengths are possible due to the presence of multiple peaks and broad TPE spectra of endogenous molecules

#### **9.2 Theoretical Background**

#### **9.2.1 Luminescence, SPA and TPA**

Electronic excitation of molecules can be performed by a physical (absorption of light), a chemical, or a mechanical process. Luminescence describes the effect of a molecule emitting light upon de-excitation into the ground state. If this excitation was created by the absorption of photons, it is called photoluminescence. Photoluminescence of molecules can be divided into two groups, fluorescence and phosphorescence, which differ in the electronic configuration of the excited state and in the emission pathways [37].

Fluorescence is the ability of some atoms and molecules to absorb photons of a particular energy and to re-emit photons with reduced energy (red-shifted) after a short time interval on the nanosecond time scale, referred to as the fluorescence lifetime. Phosphorescence differs from fluorescence by the electronic transition pathway (intersystem crossing) into the excited triplet state, resulting in a much longer excited-state lifetime in the range of milliseconds to hundreds of seconds [38].

Molecules can transit from the ground state (lower energy) to the excited state (higher energy) by absorbing photons with an energy at least equal to the energy difference between the excited and ground states. This electronic excitation can be achieved either by linear or nonlinear photon absorption. Linear excitation of the molecule is achieved by single photon absorption (SPA), whereas nonlinear absorption describes the case when two or more photons of lower energy (compared to single photon excitation) combine to bridge the energy gap needed to excite the atom or molecule. The most widely used nonlinear excitation in biomedical research is two-photon absorption (TPA), where two photons with half the energy of SPA combine for electronic excitation of the molecule [39].

From the electronic excited state the molecule can return to the electronic ground state either by non-radiative relaxation, by emitting a photon with longer wavelength (fluorescence) or by phosphorescence after intersystem crossing. All three phenomena are depicted in Fig. 9.6.

Combination of confocal scanning laser microscopy (CSLM) with SPA or TPA allows one to generate two-dimensional fluorescence images of the specimen under investigation. Within the next sections, CSLM with SPA will be termed linear fluorescence imaging (LFI) and CSLM with TPA will be termed two-photon excitation fluorescence imaging (TPEFI).

#### **9.2.2 TPA Probability and Dependencies**

The probability for a two-photon absorption (TPA) process to occur depends on the physical properties of the molecule (termed the TPA cross-section, *σ*2) and on the spatiotemporal properties of the excitation light. TPA requires the "simultaneous" arrival of two photons (within a time interval of about 10⁻¹⁸ s), giving it a quadratic dependency on the average incident light power (*Pavg*) and making it a nonlinear process. Since TPA cross-sections are usually very low compared to SPA cross-sections [40, 41], temporal and spatial confinement is crucial for increasing TPA signal generation.

**Fig. 9.6** Jablonski diagram depicting one-photon excitation fluorescence, two-photon excitation fluorescence, internal conversion and vibrational relaxation, non-radiative relaxation, intersystem crossing and phosphorescence

Temporal confinement is achieved by the use of pulsed laser sources with repetition rate *fp* and pulse durations *τp* below a few picoseconds, resulting in high peak powers. Compared to continuous wave lasers, pulsed laser sources enhance the signal by a factor of ~1/*τpfp*, which leads to a signal enhancement of 10⁵ for commonly available femtosecond laser sources with pulse durations of 100 fs and pulse repetition rates of 100 MHz.
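This enhancement factor can be checked numerically with the pulse parameters quoted above (an illustrative sketch, not a model of any specific laser):

```python
# Two-photon signal enhancement of pulsed vs. continuous-wave excitation
# at equal average power. Parameter values are the typical figures quoted
# in the text (100 fs pulses at 100 MHz).

tau_p = 100e-15   # pulse duration [s]
f_p   = 100e6     # repetition rate [Hz]

# Relative to a CW source of the same average power, the two-photon
# signal scales with the inverse duty cycle 1/(tau_p * f_p).
enhancement = 1.0 / (tau_p * f_p)
print(f"enhancement ~ {enhancement:.0e}")  # ~1e5, as stated in the text
```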

Spatial confinement depends on the excitation wavelength (*λexc*) and on high numerical aperture (NA) objectives, which produce a small focal volume and thus high peak intensities.

The probability, *na*, for two photons to be absorbed simultaneously at the focal plane, per laser pulse and per fluorophore (neglecting saturation effects and considering paraxial optics), can be written as [42]:

$$n_a = \sigma_2 \frac{P_{\text{avg}}^2}{\tau_p f_p^2} \left(\frac{\pi NA^2}{hc\lambda_{\text{exc}}}\right)^2$$

where *h* is Planck's constant and *c* is the speed of light.
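As a numerical sketch of this expression, the following evaluates *na* for illustrative, assumed parameters (a 10 GM cross-section, 10 mW average power, NA 1.0, 930 nm excitation; none of these are measured values from the text):

```python
import math

# Order-of-magnitude evaluation of the TPA probability formula above.
# All parameter values are illustrative assumptions.
h = 6.62607015e-34   # Planck's constant [J s]
c = 2.99792458e8     # speed of light [m/s]

sigma2 = 10e-58      # TPA cross-section: 10 GM, in m^4 s (1 GM = 1e-58 m^4 s)
P_avg  = 10e-3       # average power at the focus [W]
tau_p  = 100e-15     # pulse duration [s]
f_p    = 80e6        # repetition rate [Hz]
NA     = 1.0         # numerical aperture
lam    = 930e-9      # excitation wavelength [m]

n_a = sigma2 * P_avg**2 / (tau_p * f_p**2) * (math.pi * NA**2 / (h * c * lam))**2
print(f"photon pairs absorbed per fluorophore per pulse: {n_a:.3f}")
```

The result is well below one absorption event per pulse per fluorophore, consistent with the neglect of saturation in the derivation.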

As can be seen from the formula above, the two-photon signal generation is inversely proportional to the laser pulse duration, so reducing the pulse duration increases the generated signal. In practice, however, shorter pulses suffer more from dispersion, which has to be compensated. Dispersion is a phenomenon in which the phase velocity of a wave is coupled to its frequency, resulting in wavelength-dependent refractive indices in optical media. The temporal profile of a laser pulse is directly related to its spectral bandwidth, and therefore shorter pulses are more vulnerable to dispersion effects, leading to broadened pulses and less efficient two-photon excitation if not compensated [4, 43].
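The broadening of a transform-limited Gaussian pulse by uncompensated group delay dispersion (GDD) can be sketched as follows; the ~7000 fs² GDD matches the value quoted for the setup in Sect. 9.3, while the input durations are illustrative:

```python
import math

# Gaussian-pulse broadening by group delay dispersion (GDD):
#   tau_out = tau_in * sqrt(1 + (4 ln2 * GDD / tau_in^2)^2)
# for a transform-limited Gaussian input pulse of FWHM tau_in.

def broadened(tau_in_fs: float, gdd_fs2: float) -> float:
    """Output pulse duration (FWHM, fs) after accumulating gdd_fs2 of GDD."""
    x = 4 * math.log(2) * gdd_fs2 / tau_in_fs**2
    return tau_in_fs * math.sqrt(1 + x**2)

for tau in (100.0, 20.0):
    print(f"{tau:5.0f} fs -> {broadened(tau, 7000.0):7.1f} fs")
```

A 20 fs pulse broadens far more than a 100 fs pulse under the same GDD, illustrating why shorter pulses need more careful dispersion compensation.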

#### **9.2.3 Optical Resolution**

In linear fluorescence imaging, fluorescence photons are also generated above and below the focal plane, so axial sectioning is obtained by spatially filtering the emitted fluorescence signal at the detection plane. Even though axial resolution is improved by reducing the pinhole diameter, the fluorescence yield is also reduced, since fluorescence photons suffering from chromatic aberrations and strong scattering are blocked and cannot reach the detector [22]. Axial and lateral confinement in TPEFI, on the other hand, is an intrinsic property of the nonlinear excitation process, removing the need to spatially filter the signal with a pinhole.

In TPEFI the effective point spread function (*PSFTP*) can be described by the square of the illumination PSF, *PSFill* [42]:

$$PSF_{TP} = \left(PSF_{ill}\right)^2 \approx PSF^2\left(\frac{v}{2}, \frac{u}{2}\right)$$

with *v* = *k*(*NA*)*r* and *u* = *k*(*NA*)²*z*, where *k* = 2π/*λ*. Assuming *λill*/2 ≈ *λfl*, the arguments *v*/2 and *u*/2 account for the approximately doubled illumination wavelength compared to single-photon excitation.

Considering full illumination of the back aperture of the microscope objective (beam diameter > back aperture diameter), the diffraction-limited resolution for TPEFI can be approximated by the full width at half maximum (FWHM) of a Gaussian fit to the squared illumination PSF. The lateral (∆*r*) and axial (∆*z*) FWHM values of the fitted squared intensity PSF profiles are described as follows [37, 38]:

$$
\Delta r = \begin{cases}
\dfrac{0.320 \sqrt{2 \ln 2}\,\lambda}{NA}, & NA \le 0.7 \\[2ex]
\dfrac{0.325 \sqrt{2 \ln 2}\,\lambda}{NA^{0.91}}, & NA > 0.7
\end{cases}
$$

$$
\Delta z = 0.532 \sqrt{2 \ln 2} \lambda \left( \frac{1}{n - \sqrt{n^2 - NA^2}} \right),
$$

where NA is the numerical aperture of the objective lens, *λ* the excitation wavelength and n denoting the refractive index of the immersion medium.
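Plugging illustrative values into these FWHM formulas (930 nm excitation, NA 1.0, water immersion with n = 1.33; assumed values, not taken from the text):

```python
import math

# FWHM resolution estimates for TPEFI from the fitted-PSF formulas above.
# lam_nm = excitation wavelength [nm], na = numerical aperture,
# n = refractive index of the immersion medium.

def delta_r(lam_nm: float, na: float) -> float:
    """Lateral FWHM [nm], with the NA-dependent case split from the text."""
    k = math.sqrt(2 * math.log(2))
    if na <= 0.7:
        return 0.320 * k * lam_nm / na
    return 0.325 * k * lam_nm / na**0.91

def delta_z(lam_nm: float, na: float, n: float) -> float:
    """Axial FWHM [nm]."""
    k = math.sqrt(2 * math.log(2))
    return 0.532 * k * lam_nm / (n - math.sqrt(n**2 - na**2))

lam, na, n = 930.0, 1.0, 1.33
print(f"lateral FWHM ~ {delta_r(lam, na):.0f} nm")
print(f"axial   FWHM ~ {delta_z(lam, na, n):.0f} nm")
```

As expected from the formulas, the axial extent of the two-photon focal volume is several times larger than the lateral one.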

At first sight, the theoretical lateral and axial resolution of TPEFI seems worse than that of LFI due to the longer excitation wavelength. In practice, however, the achievable spatial resolutions of LFI and TPEFI are similar, because the finite-sized pinholes used in LFI broaden its theoretical PSF [22, 42].

#### **9.2.4 Linear SPA vs. Nonlinear TPA Imaging**

Linear fluorescence imaging (LFI) by SPA confocal scanning laser microscopy (CSLM) generates fluorescence over the entire excitation light cone (Fig. 9.7a), whereas nonlinear imaging generates photons only in the vicinity of the focal spot (Fig. 9.7b).

In nonlinear imaging, in particular TPEFI, two excitation photons combine their quantum energies and generate a fluorescence photon with higher quantum energy, which leads to a "bluer" emission compared to the excitation. This differs from the red-shifted emission occurring in LFI and allows the use of excitation light in the near-infrared (NIR) wavelength range (700–1000 nm) for commonly used fluorescent markers emitting in the visible spectral range [32, 39].

The use of longer wavelengths in TPEFI allows the excitation light to penetrate deeper into scattering tissue, since longer wavelengths exhibit less scattering; phototoxic effects are also reduced, since fewer endogenous one-photon absorbers are available at these wavelengths [22, 44]. Another major advantage of nonlinear imaging is the nonlinear dependence of the signal intensity (S) on the excitation light intensity (I), *S* ∝ *I*ⁿ, which is quadratic (n = 2) for TPEFI. This quadratic dependency restricts TPA to the focal volume and its close vicinity (spatially confined excitation) when focusing the laser beam through a microscope. As can be seen in Fig. 9.7b, no TPA fluorescence is created in planes above or below the focal volume, in contrast to SPA fluorescence, which is created over the entire depth of the excitation light cone. The lack of out-of-focus TPA fluorescence provides advantages for long-term *in vivo* imaging of biological tissue, since tissue viability is enhanced by reduced photodamage [22, 44, 45]. TPEFI further provides inherent three-dimensional sectioning capability without the need to spatially filter the emitted light with a confocal pinhole, as is the case for LFI.

Even in strongly scattering media this inherent sectioning capability is maintained, because the density of scattered excitation photons is usually too low for nonlinear signal generation. This is important for deep imaging, since all signal photons reaching the detector originated from the focal volume and its vicinity and therefore carry useful information [2, 22, 42]. However, special care has to be taken in choosing the collection optics to optimize fluorescence collection for deep imaging, because scattering increases the spatio-angular extent of the emitted light.

**Fig. 9.7** In single photon fluorescence imaging, the linear relation between the incident and emitted photons leads to fluorescence generation over the entire illumination cone (**a**); in two-photon excitation fluorescence imaging, fluorescence is only generated near the focal plane due to the nonlinear relationship between the signal and illumination intensity (**b**)

#### **9.3 Experimental Setup and Results**

*From* [46] *Reproduced under Creative Commons Attribution License (CC BY; https://creativecommons.org/licenses/by/3.0/legalcode).*

In this work, we demonstrate, for the first time to our knowledge, a system based on a hollow core fiber (HCF) that is capable of simultaneous *in vivo* confocal reflectance and two-photon imaging of RGCs through the mouse pupil without the use of AO. One strategy to reduce the need for AO in the device presented here is the use of eye tracking software (Heidelberg Engineering, Heidelberg, Germany) that is currently implemented in clinical confocal scanning laser devices for ophthalmoscopy and tomography [47]. The real-time eye tracking software enables prolonged signal collection from the same spot, which is critical at low signal levels and in suboptimal light focusing conditions. The simultaneous acquisition of confocal reflectance and two-photon images precisely colocalizes the retinal location where the two-photon recordings originate. Together, these features provide images with single cell resolution along with wider field fundus images. The use of an HCF for laser delivery allows our system to be split into a compact application unit (dashed blue area, Fig. 9.8) and a separate laser unit that can be placed on a nearby optical bench (gray shaded area, Fig. 9.8) without introducing noticeable pulse broadening, as the fiber dispersion is limited to approximately −200 fs²/m in our system. The application unit itself consists of a modified commercial scanning laser ophthalmoscope and an optical coherence tomography unit (Spectralis, Heidelberg Engineering) used routinely in clinical practice.

The Ti:Sapphire light source (Chameleon Ultra II, Coherent, Santa Clara, California) was tuned to a center wavelength of 930 nm for confocal reflectance imaging as well as two-photon signal generation. The laser had a repetition rate of 80 MHz and produced pulses of 140 fs duration with an output power of 1.6 W at 930 nm. A half-wave plate (HWP) and a polarization beam splitter were used for power adjustment. The group delay dispersion of the complete optical setup amounted to ~7000 fs², which was compensated for by a femtosecond pulse compressor based on prism pairs (FSPC, Thorlabs, Newton, New Jersey). The pulse duration at the sample position was measured with an autocorrelator (Mini USB PMT NIR, APE, Berlin, Germany), which confirmed approximately transform-limited pulses of 154 fs. A CCD camera (FireflyMV, FLIR, Wilsonville, Oregon) was used to measure the beam profile at the sample position. Both the second-order autocorrelation measurement and the Gaussian beam profile measurement are shown in Fig. 9.9.

After dispersion compensation, the laser beam was coupled to a 2-m HCF (GLOphotonics, Limoges, France) with a 45-mm focal length achromatic lens (AC254-045-B, Thorlabs) with which a coupling efficiency of ~89% was achieved. The coupling lens was mounted on a 25-mm XYZ translation stage (PT3, Thorlabs) and the fiber on a three-axis microblock stage (MBT616D, Thorlabs).

**Fig. 9.8** Simultaneous confocal reflectance and two-photon imaging setup: the output of the Ti:Sapphire laser is adjusted with an HWP and a polarizing beam splitter. After dispersion compensation with prism pairs, the light is coupled into an HCF with a lens (L1), where both the HCF and L1 are mounted on separate three-dimensional translation stages. The output side of the fiber is connected via an FC connector to the fiber adapter plate (blue dotted line) that is attached to the modified Heidelberg Engineering Spectralis camera head (blue dashed line). The divergence of the fiber output is increased with a negative lens (L2) before coupling it to a double achromatic lens (L3). Two lenses of equal focal length (L4 and L5) were used for fine focal readjustments. Lateral scanning was performed with galvanometric scanners (GS), where the pivot point was imaged on the mouse pupil with a scan and tube lens (L6 and L7, respectively). A dichroic beam splitter (DIM1) in the camera head reflected the signal that was coupled to a multimode fiber with a lens (L8). In the detection unit (black dotted line), the output of the fiber was collimated with a lens (L9), and a dichroic mirror (DIM2) separated the fluorescence signal from the reflectance signal. In the reflectance path, an ND filter is used for attenuation. A lens (L10) focuses the light on an avalanche photodetector. The fluorescence signal path contains another short pass blocking filter (SP) before detection with a photomultiplier tube. Steering mirrors (M); shaded box indicates optical components placed on an optical bench; black dashed line represents the excitation path while black dotted line represents the detection path. *From* [46] *Reproduced under Creative Commons Attribution License (CC BY; https://creativecommons.org/licenses/by/3.0/legalcode)*

The fiber output was coupled via FC connector to the fiber adapter plate that was mounted to the camera head of the modified Spectralis unit. The divergence of the output beam from the fiber was increased with a −6.0-mm focal length, biconcave, negative lens (LD2746-B, Thorlabs) to avoid the use of a longer focal length collimator before it was collimated with an achromatic doublet lens with a focal length of 25 mm (AC 127-025-B, Thorlabs). Two customized lens pairs of equal focal length, f = 20 mm (L4, L5 in Fig. 9.8), were integrated to enable finer focus adjustment in the axial plane. Horizontal and vertical beam scanning was performed with the standard Spectralis scan unit. In combination with a customized 50-mm focal length scan lens, an intermediate image field of 5 × 5 mm² was produced. A customized 16-mm focal length tube lens translated the intermediate image field to a field of view of ~17.5° while achieving a beam size of ~2.2 mm (overfilling the dilated mouse pupil). Both reflectance and two-photon fluorescence signals were repassed through the scanning unit, resulting in a stationary, descanned light beam. A dichroic mirror (FF735-Di02, Semrock, Rochester, New York) was used to couple the fluorescence and reflectance signal into the detection branch, consisting of a 40-mm focal length achromatic lens and a 100-μm multimode fiber, which guides the signal light to the external detection unit. The fiber output was collimated with a 12-mm focal length lens, and a second dichroic mirror (FF735-Di02, Semrock) separated the visible fluorescence light from the near infrared (NIR) reflectance light. The reflectance light was further attenuated with a neutral density (ND) filter and focused on an avalanche photodiode (RCA, New York City) with a 20-mm focal length customized achromatic lens. In the fluorescence signal path, a short-pass filter (FF01-720/SP-25, Semrock) was used to remove any leakage from the excitation light before being focused by a 100-mm focal length achromatic lens (47-972, Edmund Optics, Barrington, New Jersey) onto the photon counting detector (HPM-100-50, Becker-Hickl, Berlin, Germany) connected to a time-correlated single photon counting (TCSPC) module (SPC-150, Becker-Hickl). All imaging was performed with a horizontal line scan rate of 8 kHz and a pixel clock of

10 MHz. Confocal reflectance images were digitized at a resolution of 768 × 768 pixels with a frame rate of ~9 Hz. Fluorescence images were digitized at a 256 × 256 pixel resolution (by binning the corresponding signal to superpixels) and were averaged over 2–3 min. Customized software for real-time eye tracking (Heidelberg Engineering) was used for imaging. The confocal reflectance image served as a reference for the two-photon fluorescence signals, whose acquisition took place after correct positioning and localization. In this manner, each detected fluorescence photon could be assigned to the corresponding pixel from the reflectance image.
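The bookkeeping implied here — binning the 768 × 768 reflectance grid into 256 × 256 fluorescence superpixels and assigning each photon to a superpixel — can be sketched as follows; the function name and the photon list are hypothetical:

```python
# Sketch of the pixel-binning bookkeeping described above: the 768 x 768
# reflectance grid maps onto a 256 x 256 fluorescence image, so each photon
# tagged with a reflectance-pixel coordinate falls into one superpixel.

REFL_RES, FLUO_RES = 768, 256
BIN = REFL_RES // FLUO_RES    # 3 x 3 reflectance pixels per superpixel

def superpixel(x: int, y: int) -> tuple[int, int]:
    """Map a reflectance-image pixel to its fluorescence superpixel."""
    return x // BIN, y // BIN

# Accumulate hypothetical photon events into the fluorescence image.
counts = [[0] * FLUO_RES for _ in range(FLUO_RES)]
photons = [(0, 0), (2, 2), (3, 0), (767, 767)]   # (x, y) reflectance pixels
for x, y in photons:
    sx, sy = superpixel(x, y)
    counts[sy][sx] += 1

print(counts[0][0], counts[0][1], counts[255][255])  # → 2 1 1
```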

Two mouse strains, Thy1-YFP-16 [B6.Cg-Tg(Thy1-YFP)16Jrs/J; 6-month-old male, 35–40 g, The Jackson Laboratory, Bar Harbor, Maine] and Thy1-GCaMP3 [B6.Cg-Tg(Thy1-GCaMP3)6Gfng/J; 6-month-old male, 35–40 g, The Jackson Laboratory], were used for imaging. Mice were anesthetized with ketamine (100 mg/kg) and xylazine (10 mg/kg) by intraperitoneal injection. Pupils were dilated with one drop of 1% tropicamide (Mydriacyl, Alcon Laboratories, Mississauga, Ontario, Canada) and one drop of 2.5% phenylephrine (Mydfrin, Alcon Laboratories). After dilation, a 3.2-mm plano contact lens (Cantor and Nissel, Brackley, United Kingdom) was placed on the cornea to maintain corneal hydration and compensate for most of the nonspherical refractive errors arising from the corneal surface. During imaging, the mouse was placed on a custom-built translation stage and a bite bar was used to stabilize the head for camera alignment. All experimental procedures followed the guidelines of the Canadian Council on Animal Care, and protocols were approved by the Dalhousie University Committee on Laboratory Animals.

Fluorescence lifetime maps, obtained with fluorescence lifetime imaging (FLIM), deliver additional information about cell health [48–50]. FLIM is mainly concentration independent and measures the average duration a molecule remains in an excited state. This duration is unique, providing a molecular fingerprint [51]. Changes in fluorescence lifetime reflect changes in cellular environment, such as temperature, pH, ion, and oxygen concentration [45, 52].
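The principle behind such lifetime maps can be illustrated with a toy estimator: for an ideal mono-exponential decay, the mean photon arrival time equals the lifetime. This is only a sketch under idealized assumptions; real FLIM analysis involves instrument-response deconvolution and multi-exponential fitting:

```python
import random

# Toy estimate of the mean fluorescence lifetime from simulated TCSPC
# photon arrival times. For an ideal mono-exponential decay the centroid
# (first moment) of the arrival-time distribution equals the lifetime.

random.seed(1)
TRUE_TAU_NS = 2.5   # assumed "true" lifetime of the simulated fluorophore

# Photon arrival times after the excitation pulse, exponentially distributed.
arrivals = [random.expovariate(1.0 / TRUE_TAU_NS) for _ in range(100_000)]

tm = sum(arrivals) / len(arrivals)   # centroid estimator of the lifetime
print(f"estimated tm ~ {tm:.2f} ns (true value {TRUE_TAU_NS} ns)")
```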

First, we performed imaging at two different retinal depths in Thy1-YFP-16 mice. The simultaneous acquisition of the confocal reflectance image as well as the TPE image are shown in Fig. 9.10.

**Fig. 9.10** *In vivo* confocal reflectance and two-photon images of the retina of a Thy1-YFP-16 mouse. (**a**) Confocal reflectance image showing mouse fundus. (**b**) Simultaneously obtained TPE fluorescence image at the same transverse and axial position as in (**a**). (**c**) FLIM of (**b**) with scale bar of fluorescence lifetime (tm). (**d**) Confocal reflectance image at the same transverse position as in (**a**) but

Focused at the level of the retinal nerve fiber layer, where the axons of RGCs are located, the confocal reflectance images visualize the mouse fundus (Fig. 9.10a), whereas the TPE images show RGCs (Fig. 9.10b). For each TPE pixel, the mean fluorescence lifetime (tm) was determined and displayed in a color-coded FLIM map with a decay time range from 2.1 ns (red) to 3.2 ns (blue) (Fig. 9.10c). The same imaging was performed ~20 μm deeper but at the same lateral position, approximately at the level of the RGC somas (Fig. 9.10e, f). Although RGCs are clearly visualized in the Thy1-YFP-16 mouse strain (Fig. 9.10b, e), the acquisition of static fluorescence intensity measurements of RGCs does not deliver sufficient

information to discriminate functional from nonfunctional RGCs [47]. Previous work from our group has shown that, after experimental optic nerve injury, some RGCs expressing GCaMP3, a calcium indicator whose dynamics are related to changing calcium levels during neuronal action potentials, do not respond to a stimulus [19]. Dynamic fluorescence intensity imaging of these markers enables probing of cellular function of individual RGCs in response to physiologic stimuli.

Next, to determine whether imaging calcium dynamics with GCaMP3 was feasible with TPE fluorescence imaging, we imaged RGCs in the Thy1-GCaMP3 mouse strain. We performed imaging at two different power levels and integration times to determine if sufficient fluorescence signals could be generated to visualize individual RGCs (Fig. 9.11). Even at low power levels (close to the human-use safety threshold) [9], individual RGCs were clearly visible. The quantity of fluorescence photons detected was, however, too low to calculate an additional lifetime map.

FLIM measurements as presented in (Fig. 9.10c, f) have the potential to provide critical and complementary information about cell status [31, 53] and differentiate among RGC subtypes with different levels of vulnerability to damage. This work is currently in progress.

#### **9.4 Future Application of Two-Photon Scanning Laser Ophthalmoscopy**

Currently, most experimental models of diseases leading to RGC loss, such as glaucoma and ischemic optic neuropathy, rely on qualitative or quantitative assessment of RGC loss after tissues have been processed following termination of the experiment. Therefore, a longitudinal assessment of the degree or rate of RGC loss is not typically made *in vivo*, limiting the assessment of damage in chronic disease models.

**Fig. 9.11** *In vivo* confocal reflectance and TPE fluorescence images of the retina of a Thy1-GCaMP3 mouse at different power levels and integration times. (**a**) Confocal reflectance image showing the mouse fundus with ~10 mW laser power. (**b**) Two-photon image acquired simultaneously with (**a**) showing individual GCaMP3-expressing RGCs; power ~10 mW and ~3-min exposure time. (**c**) and (**d**) Reflectance and two-photon images, respectively, at the same lateral and axial location as (**a**) but with lower laser power (~3 mW); exposure time ~3.5 min. Scale bar, 50 μm. *From* [46] *Reproduced under Creative Commons Attribution License (CC BY; https://creativecommons.org/licenses/by/3.0/legalcode)*

The availability of transgenic mice that express fluorophores under the control of promoters thought to be expressed by RGCs, such as Thy1 [54], has greatly advanced the field. Several examples of characterization of RGC loss *in vivo* after experimental optic nerve diseases have been published [14, 15]. While readily available, these transgenic strains pose some limitations, given that the expression profile of the promoters changes after injury and the fluorophore can be encapsulated by phagocytosing cells after RGC death. Hence, the specificity and accuracy of RGC loss measurements with these strains can be limited. Even with attempts to move away from transgenic animals to make these applications more translatable, these problems persist when the fluorophore is introduced exogenously via techniques such as virus-based transfection [17].

However, as recent evidence shows [19], structural fluorescent markers are likely inadequate markers of RGC integrity, since the presence of fluorescence does not necessarily indicate functional viability. The ability to dynamically image fluorescence after a light stimulus therefore represents a significant advance. As this chapter has indicated, two-photon scanning laser ophthalmoscopy offers scientists real potential to study the functional impact of diseases causing RGC loss in experimental disease models. This technique also offers a powerful assay to study the impact of therapeutics.

Ultimately, if fluorescent indicators of functional activity can be safely introduced into RGCs in humans and safely imaged after visual stimulation with two-photon scanning laser ophthalmoscopy, remarkable progress would be made in the diagnosis and treatment of many ocular diseases. Indeed, single cell functional imaging of RGCs could represent one of the single most important imaging innovations.

#### **References**


flux in neurons in response to pulsed infrared light. In: 10069, 100691B-10069–8. 2017.

54. Feng G, et al. Imaging neuronal subsets in transgenic mice expressing multiple spectral variants of GFP. Neuron. 2000;28:41–51.

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **10**

# **Fluorescence Lifetime Imaging Ophthalmoscopy (FLIO)**

Paul Bernstein, Chantal Dysli, Jörg Fischer, Martin Hammer, Yoshihiko Katayama, Lydia Sauer, and Martin S. Zinkernagel

#### **10.1 Introduction**

Whereas sodium fluorescein, a fluorescent tracer to image and assess the retinal vasculature and its integrity, has been used since the 1960s [1], the intrinsic fluorescence of the human retina was first reported in the 1980s [2]. Delori et al. were the first to record fundus autofluorescence (FAF) spectra from single retinal locations [3], and the first images of FAF were recorded by von Rückmann et al. in the 1990s [4]. As lipofuscin, which accumulates in the retinal pigment epithelium (RPE) and is involved in the pathogenesis of age-related macular degeneration (AMD), was found to be a major retinal fluorophore, subsequent FAF studies addressed this disease. FAF was used to describe the progression of geographic atrophy of the RPE [5, 6], and different patterns of FAF distribution were found [7–9]. Specific fluorescence patterns

P. Bernstein · L. Sauer
Moran Eye Center, University of Utah School of Medicine, Salt Lake City, Utah, USA

C. Dysli · M. S. Zinkernagel

Department of Ophthalmology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland

J. Fischer · Y. Katayama (\*)
Heidelberg Engineering GmbH, Heidelberg, Germany

M. Hammer
Universitätsklinikum Jena, Jena, Germany

were assigned to sub-types of AMD and to their progression rate, but these characteristic patterns did not reveal any information about the fluorophores present, which might be of pathogenetic relevance. In order to distinguish fluorophores, Schweitzer et al. developed fluorescence lifetime imaging ophthalmoscopy (FLIO), a method to measure the fluorescence decay time. FLIO is sensitive to the fluorophores as well as to their embedding matrix [10–16] and offers high temporal resolution. The lifetime of isolated RPE was determined as τm = 273.6 ps (τ1 = 210 ps, α1 = 96%, τ2 = 1800 ps, α2 = 4%) [16].
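The amplitude-weighted mean lifetime quoted for isolated RPE can be verified directly from the bi-exponential components:

```python
# Amplitude-weighted mean lifetime of a bi-exponential decay,
#   tau_m = a1*tau1 + a2*tau2,
# reproducing the isolated-RPE value quoted above from [16].

tau1, a1 = 210.0, 0.96    # fast component [ps] and its amplitude fraction
tau2, a2 = 1800.0, 0.04   # slow component [ps] and its amplitude fraction

tau_m = a1 * tau1 + a2 * tau2
print(f"tau_m = {tau_m:.1f} ps")   # → tau_m = 273.6 ps
```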

Although fluorescence lifetime measurement is considered a relatively new technique in biomedical imaging (see Berezin and Achilefu for a review [17]), it was discovered as early as the nineteenth century. In 1859, Edmond Becquerel developed the so-called phosphoroscope, with a time resolution of 10⁻⁴ s. In the 1920s, the time resolution was improved to 10⁻⁸ s, which enabled the first fluorescence lifetime measurements [18, 19]. However, only the availability of short-pulse lasers and the introduction of time-correlated single photon counting (TCSPC) [20, 21] made fluorescence lifetime measurements sufficiently sensitive for the detection of intrinsic fluorophores in living tissue. Fluorescence lifetime imaging microscopy (FLIM) evolved based on two different techniques: (1) full-field illumination and the use of gated or streak cameras, an approach also pursued in the frequency-domain technique; and (2) the time-domain method in combination with confocal scanning laser microscopy. Specifically, two-photon excitation microscopy [22], using an inherently pulsed fluorescence excitation source, was used for FLIM investigations [23]. Whereas FLIM of intrinsic fluorophores gives detailed information on cell metabolism [24] and may detect malignant changes [25–27], the development of genetically expressed fluorescent proteins resulted in further progress in structural as well as functional imaging [28]. Another milestone in fluorescence microscopy was the introduction of Förster resonance energy transfer (FRET), enabling the detection of interactions between labeled molecules [29].

As ocular fundus autofluorescence was reported to be a possible indicator of retinal diseases [3, 30–32], Schweitzer et al. first applied fluorescence lifetime imaging to the human retina in vivo [12]. They fiber-coupled a mode-locked argon-ion laser to a scanning ophthalmoscope (cLSO, Carl Zeiss, Jena, Germany) and used TCSPC for fluorescence detection. However, the lack of an image registration algorithm limited the time available for recording an image without motion artifacts to a few seconds. This resulted in images with only some hundred photons per pixel. Despite the resulting low signal to noise ratio, the first fluorescence lifetime images were recorded in 2001 [11]. An offline registration of recorded images was introduced in 2002 [13], and the first clinical experiments in patients with AMD were published in 2003 using a picosecond diode laser as light source [14]. Although the resolution was still low due to the limited memory of the TCSPC electronics (64 × 64 pixels with a size of 80 × 80 μm²), the images clearly revealed a prolongation of lifetimes in AMD [15]. Extensive in vitro and histological studies were performed to identify the fluorophores seen in FAF and to measure their fluorescence emission spectra as well as lifetimes [16, 33]. Considerable progress was made with the use of the Heidelberg Retina Angiograph scanner (Heidelberg Engineering, Heidelberg, Germany), enabling online image registration [34]. An industrially designed prototype device based on the Heidelberg Engineering Spectralis platform was first used by Dysli et al. [35]. During the last 6 years, the diagnostic potential of this device, referred to as "Spectralis FLIO", was systematically explored in clinical studies in Bern, Jena and Salt Lake City focusing on different pathologies. The results can be found in a series of interesting and seminal publications [35–53]. Some selected highlights of the clinical results are summarized in Sects. 10.3–10.6 of this chapter.

Within the living human retina, many different fluorophores can be found. A previous review article about FLIO describes a variety of retinal fluorophores in detail, with a focus on natural endogenous retinal fluorophores measured with FLIM [53]. A second review article also highlights retinal fluorophores important in FLIO [48]. Additionally, a broad compilation of lifetimes of endogenous fluorophores from the literature was given by D. Schweitzer previously [54]. Therefore, we keep the presentation of these substances in this chapter short and focus on a compact description of the most important retinal fluorophores.

In the context of retinal fluorescence, lipofuscin is a well-known assembly of fluorophores which accumulates within RPE cells. It contributes uniquely to the retinal fluorescence and is very well characterized. As the dominant fluorophore at the posterior pole, it emits fluorescence with high intensity [55]. It appears in almost all phagocytes. In the RPE of human eyes it is formed through oxidative processes in the degradation of photoreceptor outer segments [56]. Lipofuscin accumulation is a general sign of cell ageing; therefore, the amount of lipofuscin increases with increasing age [57]. This mechanism possibly also causes a prolongation of AF lifetimes [35]. Another reason for prolonged FAF lifetimes with age could be the ageing of the lens [35]. Lipofuscin was investigated by Eldred and Katz in 1988, and further studies were conducted to better understand the constituent parts of RPE granules [58, 59]. Sparrow et al. were able to identify at least 25 different bisretinoids within these granules [60]. Lipofuscin, with an excitation maximum around 340–395 nm, shows two emission maxima (430–460 nm and 540–640 nm) [61]. The main component of the fluorescence is emitted by the hydrophobic A2E. Maximal fluorescence is emitted at an excitation wavelength of 446 nm, and the emission maximum can be found at roughly 600 nm. It shows a mean autofluorescence lifetime of approximately 0.19 ns (τ1 = 0.17 ns, α1 = 98%; τ2 = 1.12 ns, α2 = 2%) [16]. It is assumed that A2E may damage cell membranes by releasing radicals in a photochemical reaction, and a relation of the molecule to the development of a variety of retinal diseases such as AMD has been reported [62, 63]. However, the dominance of A2E in the development of retinal diseases has recently been discussed controversially, as it is possible that A2E isoforms rather than A2E, or even other lipofuscin components, could be involved in damaging the retina [64].
Additionally, it was shown that there is no relation between A2E and an increasing FAF intensity with increasing age [65]. Therefore, phototoxic theories regarding A2E are still under discussion [66].

Although the macula appears dark in standard intensity autofluorescence images due to the strong absorption of the blue light by the macular pigment (MP), it has been shown with FLIO [37, 50] that the MP emits a weak but measurable fluorescence with very short lifetimes. The autofluorescence lifetimes of the carotenoids lutein and zeaxanthin were determined using ex vivo FLIO imaging. Lifetimes of around 50 ps were found for the fluorescence decay of these substances (see Sect. 10.3 for details).

Different redox equivalents such as nicotinamide adenine dinucleotide (NADH), flavin adenine dinucleotide (FAD) and flavin mononucleotide (FMN) may impact FAF lifetimes, as their fluorescence properties may depend on the redox state of tissues. NADH shows fluorescence mostly in its reduced form, whereas NAD+ (the oxidized form) shows only a very weak fluorescence [67–69]. While free NADH in vitro shows autofluorescence lifetimes around 0.4 ns, protein-bound NADH can show decay times of 1.2 ns up to 5 ns [68]. Time-resolved fluorescence lifetime imaging very often aims to detect NADH, as this is believed to be a sensitive method to investigate the redox state of tissue [70–74]. However, it has been argued that a contribution of NADH to the in vivo FLIO signal is unlikely, since the short-wavelength range required to match its fluorescence excitation maximum (350 nm) cannot be used for retinal imaging due to absorption by the lens and cornea [53, 75]. Nevertheless, other studies describe autofluorescence lifetimes of approximately 1.27 ns (τ1 = 0.39 ns, α1 = 0.73; τ2 = 3.65 ns, α2 = 0.27) upon excitation at 446 nm [16]. Therefore, the effect of NADH on in vivo FLIO measurements is still under discussion and needs further investigation.

The fluorescence of the oxidized flavin FAD, located in mitochondria, is also of interest. Both FAD and FMN absorb light at 450 nm wavelength and show their maximal fluorescence emission at around 530 nm; the reduced forms do not fluoresce under physiological conditions [67, 68]. Typical autofluorescence lifetimes of these substances are 2.3 ns (FAD) and 4.7 ns (FMN), while protein-bound flavins show intermediate autofluorescence lifetimes (0.3–1 ns) with a weak fluorescence intensity due to quenching [68]. Skala et al. determined an autofluorescence lifetime of protein-bound FAD of 100 ps [24]. In vitro FAF lifetime investigations show flavin decay times around 2.4 ns [16].

Different components of the extracellular matrix are found in the eye. Four different types of collagen as well as elastin were previously described and are likely fluorescent [16, 69, 76–79]. The collagens (I, II, III and IV) emit fluorescence at approximately 510 nm (excitation 446 nm) [80]. Time-resolved autofluorescence investigations show different lifetimes for each type of collagen (I: 1.75 ns; II: 1.44 ns; III: 1.11 ns; IV: 1.62 ns) [16]. The autofluorescence lifetime of elastin was described to be 1.28 ns [16].

The skin pigments melanin and bilirubin emit fluorescence with maxima at 436 nm (melanin) and 520–540 nm (bilirubin) [16, 81]. As melanin occurs not only in the iris but also in the choroid and the RPE layer, it is of special interest when investigating FAF lifetimes. The excitation maximum of melanin is at 360 nm, but excitation at longer wavelengths can also yield lower, yet measurable, fluorescence intensities [82]. Time-resolved in vitro measurements at 446 nm excitation showed mean autofluorescence lifetimes of melanin powder of 916 ps (τ1 = 280 ps, α1 = 70%; τ2 = 2.4 ns, α2 = 30%) [16]. Considerably shorter lifetimes were determined for melanin dissolved in PBS (τ1 = 0.03 ns, τ2 = 0.62 ns, τ3 = 3.24 ns) [83].

The aromatic amino acids tyrosine, phenylalanine and tryptophan also fluoresce, but their excitation lies within the ultraviolet range (260–295 nm), and the emission maxima can be found between 280 and 350 nm [68]. Therefore, detection of these substances with FLIO imaging is unlikely.

Protoporphyrin IX, occurring within the cytochrome c complex in mitochondria and as a byproduct of hemoglobin synthesis, shows fluorescence with an emission maximum at 635 nm and an autofluorescence lifetime in the nanosecond range [67]. Its detection has been discussed in relation to tumorigenesis, as the amount of protoporphyrin may increase in proliferative tissue [23, 84, 85].

Pathological fluorophores, such as advanced glycation end products (AGEs), also show fluorescence at the retina [16, 86]. They are discussed within the disease-related sections of this manuscript.

#### **10.2 Technical Realization Based on the Spectralis Platform**

While fluorescence lifetime imaging microscopy is already a well-established imaging technology, the transfer of this modality to in vivo imaging of the retina in ophthalmology involves several challenges, for example the low fluorescence yield at limited excitation power and unpredictable eye movements. However, with the Spectralis FLIO, a Spectralis variant released for clinical studies, these challenges have been solved. Within an acquisition time of typically 120–180 s, reliable and reproducible in vivo FLIO data can be acquired from a patient eye with a dilated pupil.

The FLIO systems used in these clinical studies were developed and assembled on the basis of the Spectralis platform, a well-established imaging platform widely used for multicolor, fluorescence (angiography and autofluorescence), and OCT imaging on a daily basis in clinical routine. Recently, the Spectralis modalities were extended with a high resolution OCTA mode (OCT angiography), which allows the vascular plexus to be represented in three dimensions without the invasive injection of a fluorescent dye (see Chap. 7 and references for further details).

In the Spectralis OCT, a second, completely independent scanning system is implemented for the OCT path in order to actively track and compensate for eye movements, which are detected by online processing of the continuously acquired SLO images and fed back to the OCT scanner control electronics. In contrast, in the Spectralis FLIO system both lasers, the infrared laser for the reference image and the picosecond laser for fluorescence excitation, are delivered to the retina by the same SLO scanning system. The eye movements are detected by online image processing of continuously acquired near infrared reference images, and the spatial assignment of each detected fluorescence photon is based on the most recently detected eye position. If an eye movement is detected between two consecutive reference images, the photons which were detected within this period, and thus cannot unambiguously be assigned to a spatial position, are rejected.

The FLIO set-up is schematically represented in Fig. 10.1. The blue (470 nm) picosecond laser pulses are superimposed by means of a beam splitter on the continuous near infrared (815 nm) reflection laser, and both beams are simultaneously deflected in the X and Y directions by the SLO scanning unit so that the retina is scanned line by line. The field of view for FLIO images is usually 30° × 30°, corresponding to an area of approximately 8.9 × 8.9 mm² for emmetropic eyes. The repetition rate of the FLIO laser pulses

**Fig. 10.1** Left part: schematics of the optical set-up of the FLIO camera head and the laser & detection unit; see text for detailed explanation. Right part: scan algorithm of the IR tracking laser with superimposed picosecond laser pulses. Green and yellow circles represent photons detected in the short spectral detection channel (SSC) and the long spectral detection channel (LSC), respectively

is 80 MHz, i.e. the separation of two consecutive pulses is 12.5 ns. The pixel clock for standard Spectralis SLO images is 10 MHz (100 ns pixel separation) in the high speed mode (HS-mode), which is normally used for the FLIO application. This means that for each HS pixel of the infrared SLO image about 8 laser pulses are applied; the pulse width of each single pulse is on the order of 70–100 ps.
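The timing relations above can be verified with a quick calculation (values taken from the text; variable names are ours):

```python
# FLIO timing arithmetic from the stated laser and scanner parameters
rep_rate_hz = 80e6        # picosecond laser repetition rate
pixel_clock_hz = 10e6     # SLO pixel clock in high speed mode

pulse_separation_ns = 1e9 / rep_rate_hz              # 12.5 ns between pulses
pixel_dwell_ns = 1e9 / pixel_clock_hz                # 100 ns per HS pixel
pulses_per_pixel = pixel_dwell_ns / pulse_separation_ns  # 8 pulses per pixel
```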

The fluorescence light as well as the back-scattered infrared light originating from the focus volume then travels the same optical path backwards, is "descanned" to a stationary beam, and is deflected by another beam splitter towards the detection arm. A multimode fiber with a core diameter of 100 μm serves as spatial aperture, resulting in a z-transfer point spread function of about 2 mm (FWHM). This confocal set-up provides an efficient suppression of out-of-focus light; in particular, it efficiently blocks the intrinsic fluorescence of the lens tissue. Without efficient blocking of the strong lens fluorescence, which has a mean lifetime of >2 ns, it would, for example, not be possible to measure the fast (≈50 ps) decay time of the weaker fluorescence of the macular pigment. It cannot be completely excluded that for patients with very dense cataract, where on the one hand the lens fluorescence signal is strong and on the other hand the retina signal is reduced due to the double pass through the strongly scattering lens tissue, the result of the lifetime measurement of the retinal fluorophores may be biased by the contribution of the lens fluorescence. Further research is required to quantitatively assess this influence, e.g. by systematically measuring patients before and after intraocular lens (IOL) implantation.

The signal light is then guided via the multimode fiber out of the camera head and launched into the external detection unit. The fiber output is collimated, and the beam is filtered by a blocking filter (F1) to remove back-scattered light of the excitation laser at 470 nm. The back-scattered infrared (IR) light is separated from the fluorescence photons by means of a beam splitter plate (BSP 1) and then focused on an avalanche photodiode (APD) with high quantum yield in the near IR range. The APD signal is digitized to 8 bits and then transferred to the computer. A real-time image processing algorithm calculates the correction parameters in the form of an affine transformation describing the eye movements with respect to the first frame of the measurement.
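The affine motion model mentioned above maps photon coordinates detected in the current frame back into the coordinate system of the first frame. A minimal illustration (function name and parameter values are invented, not taken from the Spectralis software):

```python
import numpy as np

def to_first_frame(xy, A, t):
    """Map a position in the current frame into the first-frame
    coordinate system via the affine model x' = A @ x + t."""
    return A @ np.asarray(xy, dtype=float) + t

# Invented example: a small rotation plus a translation of a few pixels
theta = np.deg2rad(0.5)
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([2.0, -1.0])
corrected = to_first_frame([128.0, 128.0], A, t)
```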

The fluorescence light path is split (BSP 2) into two branches: the short spectral channel (SSC: ≈498–560 nm) and the long spectral channel (LSC: ≈560–720 nm). Finally, the fluorescence photons are detected by two time-correlated single photon counting (TCSPC) detection units, each consisting of a hybrid detector, which combines a highly sensitive GaAsP photocathode with an avalanche-like multiplication of the photoelectrons as used in standard APD detectors, and ultrafast read-out electronics. The time-resolved measurement works as follows: each picosecond laser pulse triggers the start of two electronic time clocks, one for the SSC and one for the LSC. The time clock is stopped either by the detection of a fluorescence photon in the corresponding detection channel or by the consecutive picosecond laser pulse triggering a new measuring period. Each detected photon is then assigned to a spatial location (XY pixel) based on the preceding result of the eye movement detection algorithm and is provided with a time stamp indicating the elapsed time between laser trigger and detection of the photon.
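The start–stop principle and per-pixel histogramming described above can be sketched as follows (data structures and names are ours, not those of the Spectralis software):

```python
import numpy as np

PERIOD_NS = 12.5   # laser repetition period at 80 MHz
N_BINS = 1024      # histogram channels per pixel (~12.2 ps each)

def record_photon(histograms, x, y, arrival_ns):
    """Sort one detected photon into the histogram of its pixel.

    arrival_ns is the elapsed time between the laser trigger and the
    photon detection, as measured by the TCSPC electronics; x, y is the
    spatial position derived from the eye-tracking result.
    """
    channel = int(arrival_ns / (PERIOD_NS / N_BINS))
    if 0 <= channel < N_BINS:
        histograms[y, x, channel] += 1

# one histogram per FLIO pixel (256 x 256 image)
hists = np.zeros((256, 256, N_BINS), dtype=np.uint32)
record_photon(hists, x=10, y=20, arrival_ns=1.0)
```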

The maximum laser power is 300 μW, which at an 80 MHz laser repetition rate corresponds to a pulse energy of 3.75 pJ. Since the laser is modulated off during the resetting periods of the horizontal and vertical scanners, a mean power of only 200 μW is measured in front of the objective and applied to the patient's retina. For a rigorous laser safety classification according to the IEC standard 60825-1:2014, the action of a single pulse (class 1 limit in the standard: 77 nJ) as well as the accumulated action of consecutive pulses within a defined angular subtense of (1.5 mrad)² must be assessed for different time regimes according to the three rules described in Sect. 4.3.f of the IEC standard. For all considered cases, the accessible emission of the FLIO system (AEFLIO) is below 1% of the corresponding accessible emission limit for class 1 systems (AELclass 1). Thus, the Spectralis FLIO is classified as a class 1 laser product and is safe for examinations on humans. See also the published laser safety assessment in Ref. [37] (still referring to the IEC standard 60825-1:2007 with slightly different limits) and in the supplement of Ref. [39].
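The pulse energy quoted above follows directly from the average power and the repetition rate:

```python
# Pulse energy from maximum average power and repetition rate
max_power_w = 300e-6     # 300 uW maximum average laser power
rep_rate_hz = 80e6       # 80 MHz repetition rate

pulse_energy_j = max_power_w / rep_rate_hz   # 3.75 pJ per pulse
```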

It might be of interest to compare the FLIO laser exposure with the exposure during fluorescence angiography (FA) with the standard Spectralis. The laser power of the Spectralis FA mode is ≈1.4× higher than that of the FLIO mode. However, since the laser wavelength of 470 nm is ≈2.1× more critical than the wavelength of 486 nm used in the Spectralis FA mode (IEC standard: factor C3 is 2.51 for 470 nm and 5.25 for 486 nm), the accessible emission of the FLIO mode relative to its limit is, according to the standard, slightly higher (≈1.5×) but comparable to the exposure of the Spectralis FA mode.
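The factor C3 quoted above follows the IEC 60825-1 formula C3 = 10^(0.02·(λ − 450 nm)) for 450–600 nm; recomputing it reproduces both quoted values (a sketch only, not a substitute for a formal laser safety assessment):

```python
def c3(wavelength_nm):
    """IEC 60825-1 wavelength correction factor for 450-600 nm."""
    return 10 ** (0.02 * (wavelength_nm - 450))

limit_ratio = c3(486) / c3(470)   # 486 nm limit is ~2.1x more permissive
power_ratio = 1.4                 # FA laser power is ~1.4x higher than FLIO
flio_vs_fa = limit_ratio / power_ratio   # relative FLIO exposure, ~1.5x
```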

Since the maximum count rate of the detection units is 10 MHz, the laser power applied to the patient's eye often has to be reduced by adding neutral density filters just in front of the single-mode fiber coupler in order to avoid an overload of the detectors.

During the examination, a histogram is built up for each XY pixel position, in which the photon counts are sorted by their arrival time with respect to the preceding laser pulse. The Spectralis FLIO acquisition software bins 3 × 3 high speed pixels (in total 768 × 768 pixels acquired at 10 MHz pixel rate for the IR reference image) into one FLIO super pixel, so that the FLIO images consist of 256 × 256 pixels. Thus, a mean detection rate of typically 2 MHz results in the collection of about 3 × 10⁸ photons during 150 s (typical examination time is 2–3 min), i.e. a mean number of about 4500 photons per FLIO pixel. The measurement is usually stopped after about 1000 photons/histogram have been collected in the darkest pixels within the area of interest (typically within the macula), since this minimum number of counts still allows the mean lifetime to be evaluated with sufficient accuracy. Finally, the acquired pixel histograms are saved to the hard drive.
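The super-pixel binning and the photon budget described above can be sketched as follows (array shapes from the text; the Poisson counts are synthetic stand-ins for measured data):

```python
import numpy as np

# Bin 3x3 high-speed pixels into one FLIO super pixel: 768 -> 256
hs_counts = np.random.poisson(500, size=(768, 768))
flio_counts = hs_counts.reshape(256, 3, 256, 3).sum(axis=(1, 3))

# Photon budget quoted in the text
mean_rate_hz = 2e6                            # mean detection rate
duration_s = 150                              # typical examination time
total_photons = mean_rate_hz * duration_s     # 3e8 photons in total
per_pixel = total_photons / (256 * 256)       # ~4500 photons per pixel
```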

Different software can be used to approximate the autofluorescence decay, such as FLIMX [87] or SPCImage (Becker & Hickl GmbH) [88], the latter being the most commonly used software in FLIO investigations. The photon arrival histogram in each of the 65,536 pixels is thereby approximated according to:

$$\frac{I(t)}{I_0} = \mathrm{IRF} \otimes \sum_{i} \alpha_{i} \cdot e^{-\frac{t}{\tau_{i}}}$$

Here ⊗ denotes convolution with the instrument response function (IRF). Bi- and tri-exponential decays were previously used to investigate the retinal fluorescence in vivo. A tri-exponential approach yields three different lifetimes (τ1, τ2, and τ3) as well as three corresponding amplitudes; a bi-exponential approach yields two different lifetimes (τ1 and τ2) as well as two corresponding amplitudes. The amplitudes represent the contribution of each component to the total fluorescence decay.
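From the fitted lifetimes and amplitudes, an amplitude-weighted mean lifetime is commonly reported. A minimal sketch (function names are ours), using the bi-exponential A2E values quoted in the fluorophore section (τ1 = 0.17 ns, α1 = 98%; τ2 = 1.12 ns, α2 = 2%):

```python
import numpy as np

def multi_exp(t, taus, amps):
    """Model decay before IRF convolution: sum_i alpha_i * exp(-t/tau_i)."""
    t = np.asarray(t, dtype=float)[:, None]
    return np.sum(np.asarray(amps) * np.exp(-t / np.asarray(taus)), axis=1)

def mean_lifetime(taus, amps):
    """Amplitude-weighted mean fluorescence lifetime."""
    taus, amps = np.asarray(taus), np.asarray(amps)
    return float(np.sum(amps * taus) / np.sum(amps))

taus, amps = [0.17, 1.12], [0.98, 0.02]        # A2E values from the text
tau_m = mean_lifetime(taus, amps)              # ~0.19 ns, as quoted for A2E
decay = multi_exp(np.linspace(0.0, 12.5, 1024), taus, amps)
```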

As the in vivo FAF typically does not decay completely within 12.5 ns (the time between two excitation pulses at an 80 MHz laser repetition rate), the application of an incomplete multiexponential decay model is standard. The digital time resolution is given by the temporal width of the histogram binning and is 12.2 ps (the interval between two laser pulses, 12.5 ns, divided by the number of channels, 1024). However, since at least three supporting points are necessary for the reconstruction of a decay, the shortest detectable lifetimes are about 30 ps [37]. For further data analysis, image processing, and especially averaging of FAF lifetimes over certain regions of interest, different software packages have been used. Commonly used are the FLIO reader (ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland) and the software FLIMX (Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany) [87].
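The incomplete-decay idea can be illustrated with a short sketch (our own simplification, not the SPCImage or FLIMX implementation): the residual fluorescence from all preceding pulses sums to a geometric series, so a slow component never reaches zero within the 12.5 ns window.

```python
import numpy as np

T_NS = 12.5                  # laser repetition period (80 MHz)
N_CHANNELS = 1024
DT_NS = T_NS / N_CHANNELS    # digital time resolution, ~12.2 ps

def incomplete_decay(t, tau):
    """Mono-exponential decay within one period, including the residual
    signal from all preceding pulses (geometric series over pulses)."""
    return np.exp(-t / tau) / (1.0 - np.exp(-T_NS / tau))

t = np.arange(N_CHANNELS) * DT_NS
signal = incomplete_decay(t, tau=3.0)    # a slow 3 ns component
residual_fraction = signal[-1] / signal[0]  # nonzero at the next pulse
```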

#### **10.3 Clinical Applications I: The Healthy Eye**

Retinal autofluorescence imaging is a commonly used tool in ophthalmology for clinical investigation and research. In addition to fundus autofluorescence intensity imaging, which derives contrast mainly from lipofuscin and its derivatives, fluorescence lifetime imaging has the potential to provide additional contrast, as fluorescence lifetimes are largely independent of the fluorophore's concentration and the fluorescence intensity. In the last decade, fluorescence lifetime imaging has gathered momentum through the research efforts and findings of several groups, and has contributed to the understanding of the pathophysiology of the healthy eye as well as of retinal disorders ranging from degenerative diseases to retinal dystrophies and other diseases that affect the retina.

The FLIO signal in healthy eyes is very well established and has been shown to be highly reproducible between different groups. In the 30° FLIO field centered at the macula, the shortest lifetimes are found right at the center of the fovea. Intermediate lifetimes are detected across the rest of the retina, and the longest decay times are found at the optic disc. Schweitzer et al. first published this typical pattern using an experimental FLIO device before it was coupled to a Spectralis [15]. Dysli et al. in Bern [35] were the first to show this pattern in healthy eyes using the Spectralis FLIO, followed by Sauer et al. in Jena as well as Klemm et al. in Ilmenau. In the process of investigating the FLIO signal in healthy eyes, it was possible to show a distinct fluorescence of the macular pigment itself. This finding is highlighted here.

#### **10.3.1 Macular Pigment**

In the healthy eye, the anti-oxidative carotenoids lutein, zeaxanthin and meso-zeaxanthin accumulate in the center of the macula [89, 90]; they are therefore called macular pigment (MP). Whereas the amounts of lutein and zeaxanthin in the MP depend in part on dietary intake, meso-zeaxanthin is formed from lutein by the RPE [91]. Based on the existence of highly specific binding proteins in the area of the fovea, MP accumulates in an area of 0.5 mm in diameter at the posterior pole of the eye [92–98]. These carotenoids are most concentrated in the foveal Müller cells and the Henle fiber layer, but they are also found in other inner retinal layers [99, 100].

MP is believed to protect the eye from light damage, especially within the blue-light range around 460 nm [92, 101, 102]. MP may absorb blue light, which is potentially photo-toxic, before it reaches the photoreceptor layer; it may also quench free radicals [100, 103–105]. Levels and distribution can be altered in different retinal diseases such as macular telangiectasia type 2 (MacTel) as well as age-related macular degeneration (AMD). In healthy eyes, different distribution patterns have been described. MP can show a slim cone-like distribution, a broader plateau or even ring-like distributions within the 1 mm ring (Fig. 10.2).

On fundus autofluorescence intensity images, MP appears as a dark spot in the center of the fovea because of its absorption of the blue excitation light [106]. It was therefore long believed to only absorb blue light and not to show any fluorescence itself. Based on this assumption, dual-wavelength autofluorescence imaging at blue (absorbed) and green (non-absorbed) wavelengths is used to calculate individual amounts of MP. However, a resonance-Raman-based study demonstrated that MP does show fluorescence, albeit with low quantum efficiency [107, 108]. As fluorescence lifetimes are independent of the fluorescence intensity, FLIO can also detect the fluorescence of carotenoids [37]. This was first described by Sauer et al., who found a strong correlation between mean fundus autofluorescence lifetimes and the amount of MP [37]. The short autofluorescence lifetimes at the fovea, depicted in red color in FLIO images, were therefore attributed to retinal carotenoids (Fig. 10.2).

A further study investigated autofluorescence lifetimes in patients with macular holes [44].


Interestingly, the distribution of MP and of the short autofluorescence lifetimes was identical. In the center of macular holes, where no MP could be localized, the short autofluorescence lifetimes were absent; they were found adjacent to the macular hole, corresponding to the MP distribution. This study also describes a follow-up of patients before and after successful vitreoretinal surgery with closure of the macular hole [44]. It was reported that the short autofluorescence lifetimes corresponding to the macular pigment migrate back to the fovea as the macular hole closes. Furthermore, an increase of short autofluorescence lifetimes in the fovea after surgery was associated with a better visual outcome for the patients.

A different FLIO study investigated fundus autofluorescence lifetimes in patients with albinism [50]. These patients usually do not have a foveal depression, a condition described as foveal hypoplasia. Interestingly, these patients also lack macular pigment accumulation at the posterior pole. Two patients were examined, and the albinism was electrophysiologically confirmed based on a reversed pattern-onset distribution of visual evoked potentials (VEP) across the occipital scalp, which is indicative of the optic misrouting associated with albinism. Only one of the four investigated eyes showed small amounts of MP, consistent with a small area of short autofluorescence lifetimes. The other three eyes did not show any macular pigment, and short autofluorescence lifetimes were absent from the fovea.

Finally, ex vivo measurements were performed on the carotenoids lutein and zeaxanthin [50]. These measurements confirm the weak fluorescence and short lifetimes of the carotenoids, with FAF lifetimes of 50 ps for lutein and 60 ps for zeaxanthin in the SSC. Furthermore, it was reported that in combination with binding proteins, the mean autofluorescence lifetimes were prolonged.

Overall, FLIO will probably not replace existing methods for MP measurement, but it may provide additional insights into it. Furthermore, FLIO does not need a reference region to assess MP. This may be especially helpful in diseases such as retinitis pigmentosa and choroideremia, as well as in geographic atrophy in end-stage AMD. Here, retinal degenerations affect the reference areas of the MP measurement, and calculation of the MP amount is therefore not possible. FLIO may fill the gap to assess MP in these patients, where other methods fail to give reliable measures.

#### **10.4 Clinical Applications II: AMD and Retinal Dystrophies**

#### **10.4.1 Age-Related Macular Degeneration**

Age-related macular degeneration (AMD) is one of the major causes of vision loss in the elderly population [109]. Besides genetic predisposition, several other factors such as diet, hypertension, arteriosclerosis, elevated serum lipids, smoking, alcohol abuse, and exposure to ultraviolet light have been identified [110–114].

Several stages of AMD can be identified: early, intermediate and advanced AMD [115, 116]. Hallmarks of AMD are retinal drusen and retinal pigment epithelium (RPE) abnormalities like hypo- or hyperpigmentation. Several subgroups of drusen exist such as soft drusen, hard drusen, cuticular drusen, crystalline drusen, and reticular pseudodrusen [110, 117]. Retinal drusen are focal deposits of extracellular debris situated between the basal lamina of the RPE and the inner collagenous layer of Bruch membrane. Soft drusen consist of lipid rich material and other constituents such as zinc, oligosaccharides, amyloid, apolipoproteins, and complement factors [118, 119]. Reticular pseudodrusen have similar constituents as soft drusen [110, 120]. The advanced stage of AMD is characterized by either geographic atrophy of the RPE involving the foveal center or neovascular maculopathy or a combination of both.

Major changes in several fluorophores have been identified in all stages of AMD. The most important change is the accumulation of lipofuscin within the RPE and, in geographic atrophy, its absence [110]. Fundus autofluorescence (FAF) has been used to quantify lipofuscin in the RPE and to identify areas of geographic atrophy, which appear as hypo-fluorescent lesions in FAF.

In the past decade, several studies have investigated fluorescence lifetime patterns in various stages of AMD using FLIO. Fluorescence lifetime measurements derive their signal not only from lipofuscin but also from many other endogenous fluorophores, such as visual cycle end products like retinal derivatives, and therefore offer the ability to resolve retinal abnormalities at early stages of the disease. Most of these studies employed a modified Spectralis system (Heidelberg Engineering) using a 470 nm pulsed laser (80 MHz; <100 ps pulse width) and highly sensitive hybrid detectors for time-correlated single photon counting. Two separate channels are used: a short spectral channel (SSC; 498–560 nm) and a long spectral channel (LSC; 560–720 nm).

Recent studies have shown that mean retinal autofluorescence lifetimes of the macula are generally significantly prolonged in patients with AMD [15, 41, 43]. This prolongation was found to occur in a ring-shaped manner, particularly visible in the LSC with the color scale set from 300 to 500 ps [51]. If color ranges are set differently, the pattern may be difficult to observe. The prolongation occurs in the area between the large arcade vessels and is most pronounced at the nasal and temporal macula. Figure 10.3 shows this pattern, which may represent a first sign of AMD, as a trace of it was also found in patients at high risk of developing the disease.

Areas of retinal drusen, however, are heterogeneous and can have shortened, normal or prolonged autofluorescence lifetimes. This may be a result of different forms of AMD. A study that investigated drusen in non-exudative AMD only

**Fig. 10.3** FLIO and FAF intensity images from two healthy individuals and two patients with AMD. (Reprinted from [51])

**Fig. 10.4** Fundus autofluorescence imaging in a patient with geographic atrophy due to age-related macular degeneration. Fundus autofluorescence image (FAF) and color-coded image of fluorescence lifetime imaging ophthalmoscopy (FLIO). The lifetimes within the atrophy are prolonged; however, short lifetimes persist in the foveal center. These short lifetimes have been correlated with the presence of macular pigment

did not find shortened autofluorescence lifetimes [51]. However, drusen with very short fluorescence lifetimes may represent newly formed deposits [40]. Longitudinal studies are needed to show whether fluorescence lifetime features change within individual drusen over time and whether this may help to identify newly formed drusen. In intermediate AMD, areas of intraretinal hyper-reflective deposits (possibly melanolipofuscin) display long lifetimes whereas deposits within the photoreceptor outer band display relatively short lifetimes.

In patients with geographic atrophy due to advanced AMD, fluorescence lifetimes are significantly prolonged both in the SSC and the LSC within areas of atrophy (Fig. 10.4) [41, 49]. In the border zone of geographic atrophy, where a characteristic FAF distribution of hyper-fluorescence can be observed [121], distinct patterns of only marginally prolonged fluorescence lifetimes were observed [41]. In many cases with geographic atrophy, areas with very short fluorescence lifetimes can be observed in the foveal area. These short lifetimes might originate from residual macular pigment within the outer nuclear and plexiform layers [37]. A correlation between these short foveal lifetimes in both spectral channels and best-corrected visual acuity (BCVA) was shown [41]. Short fluorescence lifetimes within the macular center may therefore provide useful information about the integrity of the foveal photoreceptors.

Neovascular AMD, the second form of advanced AMD, is characterized by the presence of choroidal neovascularization (CNV). According to the localization of the CNV complex, neovascular AMD can be classified into type 1 (below the RPE) or type 2 (above the RPE) CNV [116]. In pilot studies, there was only minimal contrast of the CNV complex in fluorescence lifetime imaging (unpublished data). Areas of CNV display only slightly prolonged fluorescence lifetimes. In the active stages of this disease, hyporeflective areas of intra- or subretinal fluid can be seen in optical coherence tomography (OCT). These areas of intra- or subretinal fluid are not directly identifiable in fluorescence lifetime maps acquired by FLIO (unpublished data).

#### **10.4.2 Retinal Dystrophies**

Fundus autofluorescence intensity measurement has emerged as one of the key tools for noninvasive retinal imaging in retinal dystrophies for diagnostic purposes as well as for follow-up examinations [122]. In association with upcoming genetic specification and differentiation as well as emergent trials addressing genetic modification in retinal dystrophies [123–125], imaging modalities to record subtle changes in retinal metabolism and structures are essential.

Retinitis pigmentosa summarizes a genetically heterogeneous group of degenerative retinal diseases with different inheritance patterns and penetrance. Progressive rod dysfunction followed by cone dysfunction clinically leads to primary night blindness and progressive constriction of the visual field [126]. In FAF intensity measurement, a hyperfluorescent ring may be identified, delineating the border between morphologically intact retinal layer structure and altered outer retinal layers [127]. Using fluorescence lifetime imaging, characteristic ring structures with specific grades of degeneration can be identified: intact retina, photoreceptor atrophy, and combined photoreceptor and RPE atrophy (Fig. 10.5a) [47, 52]. This lifetime pattern may allow for a more differentiated clinical assessment of the patients and their follow-up examinations over time.

Stargardt disease is the most common monogenetic juvenile retinal dystrophy, inherited in an autosomal recessive pattern [128]. Due to a mutation in the ABCA4 gene, coding for the ABCA4 transmembrane transporter in the photoreceptor outer segments, visual cycle byproducts accumulate, leading to progressive dysfunction and destruction of the outer retina and the RPE [129]. Clinically, a progressive accumulation of yellowish retinal deposits is visible, which appear as hyperfluorescent flecks in FAF intensity images. In advanced disease stages, RPE atrophy manifests as hypoautofluorescence in FAF intensity images. Fluorescence lifetime measurement in Stargardt disease revealed that hyperfluorescent flecks may feature shorter or longer lifetimes compared to the surrounding retina (Fig. 10.5b) [40]. In a subgroup of patients, areas and flecks with shorter fluorescence lifetimes were identified even before they became visible in the FAF intensity measurement. Over time, they appear in FAF and their fluorescence lifetimes become gradually longer. In areas of RPE atrophy, generally prolonged fluorescence lifetimes were observed.

Choroideremia is a rare monogenetic retinal dystrophy with an X-linked inheritance pattern, thus affecting mainly young male subjects [130]. It is caused by a mutation in the CHM gene, coding for a protein responsible for membrane trafficking in the retina and the RPE. Clinically, progressive degeneration of the choroid, the RPE and the neurosensory retina is observed, leading to progressive impairment of visual function and finally complete blindness [130]. Whereas FAF intensity sharply delineates the borders of the

**Fig. 10.5** Fundus autofluorescence imaging in patients with retinal dystrophies. (**a**) Patient with retinitis pigmentosa, (**b**) patient with Stargardt disease, (**c**) patient with choroideremia (*FAF* fundus autofluorescence intensity image, *FLIO* fluorescence lifetime imaging ophthalmoscopy)

RPE, FLIO provides additional information within the area of RPE atrophy (Fig. 10.5c) [40]. Areas with remaining RPE feature the shortest lifetimes, followed by areas of RPE atrophy with remaining photoreceptor layers in OCT; areas with complete atrophy of the RPE and the outer nuclear layers show the longest fluorescence lifetimes. Follow-up examinations in patients with choroideremia have shown that FLIO is a sensitive tool for monitoring subtle changes of retinal degeneration over time.

In summary, FLIO enables the detection and identification of early disease-associated changes at the level of the RPE, the outer retinal layers, and possibly also the choroid. In Stargardt disease, retinitis pigmentosa, and choroideremia, we have shown that FLIO provides supplementary information in addition to commonly used standard imaging modalities such as FAF intensity measurement, color fundus imaging, and OCT.

#### **10.5 Clinical Applications III: Macular Telangiectasia**

#### **10.5.1 Macular Telangiectasia**

Macular Telangiectasia type 2 (MacTel) is an inherited retinal disease with an onset typically between 40 and 60 years of age, although younger patients have also been reported [131–133]; the youngest affected patient was diagnosed at the age of 21. Although patients usually do not progress to legal blindness, vision is often significantly disturbed. Patients initially report metamorphopsia and difficulties with reading, which progress to visual disturbances affecting daily life [134]. MacTel affects an oval-shaped area extending approximately 5°–6° from the foveal center. Macular pigment is found around the MacTel area at an eccentricity of 5°–9° instead of in the central fovea [101, 135–138].

Initially, MacTel was believed to be a rare disease, but as researchers learn more about its characteristics and clinicians learn to distinguish MacTel from other retinal diseases such as AMD, its prevalence is now believed to be much higher than initially assumed [139, 140]. Despite intensive research, no causative gene has been found for MacTel, but a dominant inheritance with reduced penetrance is likely [141–145]. Retinal imaging plays a very important role in distinguishing MacTel from other retinal diseases such as AMD, which may show similar features. Several imaging modalities have been used to describe features of MacTel [131, 146–148]. Fundus photography may show retinal greying, a feature that is often difficult to discern. In OCT imaging, retinal cysts and ellipsoid-zone loss may be observed at the temporal side of the fovea [149, 150]. However, these cysts may in some cases also be found at the nasal side, or may be absent, especially in early disease stages; furthermore, they may be mistaken for changes caused by neovascular AMD. Blue-light reflectance imaging has been described to show MacTel-related changes, but results to date are not satisfactory. Autofluorescence imaging is often relatively normal in early stages, except for a decrease of the central hypofluorescence, leading to a hyperfluorescent macular area. Macular pigment levels are often reduced in initial stages of the disease and can evolve to ring-like distributions in later stages [151]. Fluorescein angiography is usually able to show leakage indicative of MacTel, but non-invasive imaging modalities would be preferred. Recently, FLIO has emerged as a novel, non-invasive tool to detect changes in MacTel with very high contrast [45]. It highlights the MacTel area in affected individuals and, especially in early stages, shows a temporal, crescent-shaped prolongation of FAF lifetimes (Fig. 10.6).
Especially the SSC in FLIO imaging seems to highlight MacTel-related changes. A standardized ETDRS grid was used to characterize different areas of the fundus; the area corresponding to the MacTel region (T1) showed significantly prolonged FAF lifetimes compared to the reference region (T2), whereas in healthy eyes T2 showed slightly longer fluorescence decays. Based on this finding, a ratio was established to distinguish definite MacTel from definitely healthy eyes. If T2/T1 is larger than 1.0, the person is likely healthy, and a ratio below 0.9 (in combination

**Fig. 10.6** FLIO and FAF intensity images for a healthy person and a MacTel patient

with typical FLIO findings) likely indicates MacTel. This ratio, as well as the images obtained with FLIO, may help to identify patients with MacTel. Furthermore, FLIO may also indicate affected individuals at a stage where they still show a clinically healthy fundus exam [45]. This has already been shown for clinically unaffected parents of MacTel patients and is currently being investigated for the second generation (children of MacTel patients, unpublished data). The earliest changes visible with FLIO, however, seem to lie not at the temporal side of the fovea but rather superiorly, again presenting as prolonged FAF lifetimes.
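The T2/T1 decision rule described above can be sketched as a small helper function; the cut-offs 1.0 and 0.9 are taken from the text, while the function name and return labels are illustrative only:

```python
def mactel_ratio_classification(t2_lifetime, t1_lifetime):
    """Classify an eye from the mean FAF lifetimes (e.g. in ps) of the
    ETDRS areas T1 (MacTel region) and T2 (reference region)."""
    ratio = t2_lifetime / t1_lifetime
    if ratio > 1.0:
        return ratio, "likely healthy"
    if ratio < 0.9:
        return ratio, "likely MacTel (with typical FLIO findings)"
    return ratio, "indeterminate"
```

Ratios between 0.9 and 1.0 are treated here as indeterminate, since the text assigns a likely diagnosis only outside that band.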

Overall, FLIO is a novel tool for the detection of MacTel and likely gives the best contrast of all non-invasive imaging modalities. It highlights the MacTel area and may be capable of indicating MacTel-related retinal alterations at the earliest stages. Although the availability of FLIO is currently still limited, it is likely to become a very helpful tool for the detection of MacTel, especially at its earliest stages.

#### **10.6 Clinical Applications IV: Diabetic Retinopathy**

Diabetic retinopathy is a microvascular complication of diabetes [152]. Microvasculopathy and inflammation finally result in neuronal degeneration [153] and a breakdown of the blood–retina barrier (BRB), causing retinopathy and macular edema [154]. As hyperglycemia is a primary event in diabetes, it causes not only an impairment of the vascular endothelium but also a general protein glycation. The formation of advanced glycation end products (AGEs) in the non-enzymatic Maillard reaction of proteins with glucose and other sugars is involved in BRB breakdown [154]. In addition, endothelial dysfunction and the protein kinase C pathway may play a role [155]. As protein glycation is a process that generally takes place in ageing tissue and predominantly affects long-lived proteins, it is greatly enhanced in diabetes mellitus [156]. It correlates with the level as well as the duration of hyperglycemia [157]. Protein glycation comprises several steps. First, Schiff bases are formed in a reaction of the aldehyde and ketone groups of sugars with amino groups of proteins. This is followed by the Amadori rearrangement, finally resulting in the AGE. The process is further enhanced by the highly reactive carbonyl groups of intermediates such as the α-oxoaldehydes glyoxal and methylglyoxal [158]. These intermediates are generated not only by the Maillard reaction but also by other pathways such as auto-oxidation of sugars and glycolysis [159]. Oxidative end products, such as pentosidine and *N*-carboxymethyllysine, are distinguished from non-oxidative AGEs (e.g. hydroimidazolone and pyrraline) [158, 160]. A well-known AGE is glycated hemoglobin (HbA1c), which is used clinically for long-term monitoring of diabetes [156, 161–163]. Upon hyperglycemia, AGEs accumulate in the lens, the cornea, the vitreous, and the retina of the eye. Thus, AGEs contribute to diabetic retinopathy in different ways.
They may damage the vascular endothelium and, subsequently, also affect the pericytes. This leads to a disruption of the BRB and can result in diabetic macular edema, one of the most sight-threatening complications of diabetes [164, 165]. Furthermore, AGEs have procoagulant potential, contributing to the capillary occlusion typical of diabetic retinopathy [158]. Neuronal cells are also directly affected [166]. Animal experiments showed AGE deposition in the vascular as well as the neuronal compartment of the retina [160]. Protein crosslinking, namely the covalent binding of lysine residues, alters the tertiary structure of proteins and thus impairs their function. Moreover, the modified proteins are able to bind to the receptor for advanced glycation end products (RAGE), which is expressed by various cell types such as macrophages, monocytes, endothelial cells, glial cells, and neurons. Besides inflammatory reactions [167], this results in the secretion of cytokines, adhesion molecules, and growth factors like vascular endothelial growth factor (VEGF) [164]. VEGF stimulates neovascularization, which is the diagnostic criterion for proliferative diabetic retinopathy. Finally, activation of RAGE may exert oxidative stress on the cells through the generation of reactive oxygen species (ROS), leading to neuronal cell death [168].

Changes of FLIO lifetimes in healthy subjects at different blood glucose levels were previously described by Klemm and coworkers [169]. Additionally, AGEs seem to be fluorescent, and their serum concentration was found to increase with the severity of diabetic retinopathy [170]. As increased fundus autofluorescence (FAF) has been found in diabetic macular edema in association with decreased macular sensitivity [171], Schweitzer et al. [38] and Schmidt et al. [46] investigated fluorescence lifetimes in diabetic patients.

Schweitzer et al. [38] compared the fluorescence decay upon 448 nm excitation for a group of 48 patients suffering from type 2 diabetes without retinopathy with 48 healthy control subjects of the same age. They found a general prolongation of the fundus autofluorescence lifetimes in diabetic eyes. Using a three-exponential fit of the decay and a sophisticated statistical procedure, they achieved a good discrimination of both groups for the mean fluorescence lifetime τm, with sensitivities of 73% and 70% and specificities of 84% and 64% for the two spectral channels (490–560 nm and 560–700 nm), respectively. The best discrimination, however, was achieved by the intermediate decay time component τ2 at 490–560 nm (sensitivity 84%, specificity 76%), which the authors assigned to fluorophores in the retina. They discuss this as potentially resulting from reduced protein binding of FAD, as well as from protein glycation that may lead to an accumulation of AGEs. In a subgroup analysis, they found a considerably better discrimination between phakic patients and controls than in pseudophakic eyes. Thus, they concluded that the lens fluorescence influences the measurements at the fundus despite the use of a confocal scanning laser system, owing to the extremely strong fluorescence emission from the lens. An accumulation of AGEs in the lens is well known [172]; this could in part account for the prolonged lifetimes measured in diabetic retinas.
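The mean fluorescence lifetime τm referred to above is commonly computed as the amplitude-weighted average of the fitted decay components, τm = Σ aᵢτᵢ / Σ aᵢ. A minimal sketch (the component amplitudes and lifetimes below are illustrative, not patient data):

```python
def mean_lifetime(amplitudes, lifetimes):
    """Amplitude-weighted mean fluorescence lifetime of a multi-exponential
    decay fit: tau_m = sum(a_i * tau_i) / sum(a_i)."""
    weighted = sum(a * t for a, t in zip(amplitudes, lifetimes))
    return weighted / sum(amplitudes)

# Three-exponential fit: short, intermediate (tau2), and long (tau3)
# components; lifetimes in ps, amplitudes as relative contributions.
tau_m = mean_lifetime([0.70, 0.25, 0.05], [80.0, 450.0, 2500.0])
```

Because τm pools all components into one number, the individual components (such as τ2 here, or τ3 in the Schmidt et al. study below) can carry discriminative information that τm averages away.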

Schmidt et al. [46] extended this study to patients with diabetic retinopathy. They compared fluorescence lifetimes upon excitation at 470 nm in 34 patients suffering from nonproliferative diabetic retinopathy (NPDR) with those of 28 age-matched healthy controls. An example of a patient with diabetic retinopathy is given in Fig. 10.7. Fluorescence lifetimes were recorded in the macula and in two concentric annuli given by the standard ETDRS grid (Fig. 10.7, middle left); data from a three-exponential fit of the decays were used.

Consistent with Schweitzer et al., they showed increased lifetimes in the patient group in all investigated retinal fields (Fig. 10.8). This holds true for both spectral channels but was more pronounced in the SSC (498–560 nm, p ≤ 0.002) than in the LSC (560–700 nm, p < 0.05). A ROC analysis using a logistic regression model resulted in a sensitivity of 90% and a specificity of 71% for the discrimination of NPDR patients. In contrast to Schweitzer et al., Schmidt et al. found the best discrimination for the long-lived fluorescence component τ3 instead of τ2. This might result from the longer excitation wavelength used. Again, the formation

**Fig. 10.7** Mean fundus autofluorescence (FAF) lifetime images (FLIO) from two spectral channels, as well as FAF intensity images, from the retina of a healthy control (left) and a diabetic retinopathy patient (right). The middle left panel comprises a standardized ETDRS grid

of AGEs in neuronal, vascular, and glial cells was discussed as the source of the prolongation of lifetimes. This was corroborated by FLIO measurements of the lenses of the subjects, which showed shorter lifetimes in the patients, again predominantly in the SSC. As an AGE model (bovine serum albumin incubated with glucose) showed a decay time of 1.7 ns and an emission maximum of 523 nm [16], AGE accumulation must increase the physiologically shorter fundus autofluorescence lifetime but decrease that of the lens, which is known to be longer in the healthy state. The assumption of AGEs as the source of an additional fluorescence from the ocular fundus is corroborated by the finding of a correlation of the abundance of the intermediate lifetime component with the HbA1c value of the patients (SSC: p = 0.009 and LSC: p = 0.016).

In conclusion, these investigations indicate that FLIO has the potential to show protein glycation as well as alterations in coenzymes of the cellular energy metabolism that are associated with diabetes. This might help elucidate pathways leading to diabetic retinopathy and, thus, provides opportunities for differential diagnostics with the option of individualized therapy.

#### **10.7 Conclusion and Summary**

FLIM (fluorescence lifetime imaging microscopy) is a well-established technology in the field of microscopy that provides additional information on the temporal characteristics of the fluorescence decay. In the last 10 years, this technology has been transferred to ophthalmology (FLIO), aiming for a better understanding of the endogenous fluorophores within the retina and of their role and changes during the development of retinal pathologies. The FLIO modality was evaluated in several investigational studies on patients with different retinal diseases, and it has been shown that reliable and reproducible data can be acquired in a clinical setting. In many respects, a good correlation of FLIO data with other existing imaging modalities was shown, and in some diseases the lifetime contrast could provide an earlier or more reliable diagnosis and a finer grading of the disease stage than standard autofluorescence imaging or OCT.

#### **References**


copy (FLIO) – a novel way to assess macular telangiectasia type 2 (MacTel). Ophthalmol Retina. 2018;2(6):587–98.


nent of lipofuscin. Invest Ophthalmol Vis Sci. 1999;40(3):737–43.


of the human retina in healthy volunteers. In: SPIE BiOS. SPIE; 2016.


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

#### © The Author(s) 2019 J. F. Bille (ed.), *High Resolution Imaging in Microscopy and Ophthalmology*, https://doi.org/10.1007/978-3-030-16638-0\_11

# **Selective Retina Therapy**

Boris Považay, Ralf Brinkmann, Markus Stoller, and Ralf Kessler

### **11.1 Retinal Therapy: A Short Historic Overview**

Light-based retinal therapy was discovered in the late 1940s using sunlight [1], which was soon replaced by xenon light flashes and, once powerful lasers became available, by coherent monochromatic radiation. The latter first proved successful for coagulation in clinical trials in the early 1970s [2].

Traditional laser photocoagulation (LPC) utilizes thermal interaction, by which light energy absorbed by tissue pigments is converted to heat, causing photothermal denaturation; the temperature rise depends on laser power, wavelength, and exposure time, in addition to the properties of the target tissue [3]. LPC techniques have been a mainstay for the treatment of several

B. Považay

R. Brinkmann (\*) Institute of Biomedical Optics, University of Lübeck, Lübeck, Germany

Medical Laser Center Lübeck, Lübeck, Germany

M. Stoller Meridian AG, Thun, Switzerland

R. Kessler Heidelberg Engineering GmbH, Heidelberg, Germany

retinal diseases. Multiple hypotheses regarding the biological effects in the different diseases are still under discussion, including a reduction of oxygen demand by destroying photoreceptors, alterations of the local permeability, and laser stimulation of the RPE in edematous areas towards proliferation and rejuvenation. Despite the lack of a full understanding of the underlying mode of action, LPC was found beneficial to prevent retinal detachment after retinal hole formation [1], for the treatment of diabetic retinopathy (DR) [4], diabetic macular edema (DME) [5], neovascular age-related macular degeneration (AMD) [6], central serous retinopathy (CSR) [7], and some other retinal pathologies.

LPC has been shown to be very successful in panretinal photocoagulation for DR, the hypothesis being that a reduction in oxygen demand achieved by coagulating retinal tissue in the periphery preserves central vision. Furthermore, macular grid treatment for DME was and still is very successful with respect to maintaining vision and reducing edema in the central macula.

Laser therapy for AMD has an even more varied history, driven by the availability of laser technology and diagnostic tools. Starting in the 1970s, the laser parameters used in early studies up to the late 1990s, and even at the beginning of the new millennium, were widely inconsistent. Reports of improvements as well as of a lack of correlation with respect to the treatment


HuCE optoLab, Bern University of Applied Sciences, Biel, Switzerland

can be found. In most cases, researchers utilized readily available laser diodes or argon ion lasers with the ability to irradiate tissue in continuous or quasi-continuous wave (cw) modes. As most recently reviewed in [8], studies involved fellow-eye treatment of drusen sites and their short-term visibility as well as long-term changes. With either slightly positive or negative outcomes for AMD being reported, the inconsistency in the results might be associated with different levels of choroidal neovascularization due to the inflammatory response caused by necrosis of the RPE and neighbouring tissue. LPC was an important tool before the advent of a wide range of pharmacotherapeutic agents that diminished the significance of laser treatment to time-critical cases. Despite the drugs' success, several clinical trials indicate that the therapeutic effects of intravitreal agents such as anti-VEGF injections and steroid implants are short-term compared to those of traditional laser photocoagulation and, furthermore, have the drawback that their application cannot be precisely controlled spatially [9]. To avoid the regular arduous, cost-intensive, and also risky injections required for pharmacological treatment of this chronic and recurrent disease, improvements in laser irradiation were pursued.

Due to the slow heating process and the strong heat dissipation into the surrounding tissue when using millisecond laser exposures, massive collateral damage of surrounding structures such as Bruch's membrane and the choriocapillaris, and in particular of the healthy overlying photoreceptor (PR) cells, is unavoidable [10, 11]. These adverse effects can lead to scotoma, reduced night vision, and disruption of the retinal anatomy through scarring [12, 13].

In the last decade, microsecond pulsed lasers in the green spectral range have become readily available for clinical applications that aim to induce a specific therapeutic effect while avoiding damage to the PR, neural retina, and choroid by selectively targeting different fundus structures. The following sections describe the laser effects on the RPE layer, the idea of its selective treatment, the underlying physics, as well as initial experiments and clinical study results.

#### **11.2 The Concept and State of the Art of Selective Retina Therapy**

The retinal pigment epithelium (RPE) is a monocellular layer located at the outer retina, between the photoreceptors on its apical side and Bruch's membrane adjacent on its basal side (Fig. 11.1a). The RPE cells are connected by tight junctions and thus form the blood–retina barrier to the choroid. The RPE has multiple tasks [14]; among many others, it controls the outer retinal metabolism and continuously pumps water diffusing from the vitreous body into the retina towards the choroid, in order to prevent retinal swelling. The cells' apical microvilli encompass the rear parts of the rod and cone outer segments, which are frequently shed and phagocytosed within the RPE cells to regenerate 11-cis-retinaldehyde, serving as the visual pigment crucial to the vision cycle.

**Fig. 11.1** Electron micrographs of a young (**a**) [18] and an old (70 years) human RPE cell (**b**) [16]. *zk* nucleus, *pg* pigment (melanosome), *as* photoreceptor outer segment, *bm* Bruch's membrane, *L* lipofuscin (partly comprising pg at arrowheads)

Metabolic, lipid-rich end products emerging from this process are supplied toward the choroid. With respect to the molecular transport of proteins, oxygen, and water, the RPE and Bruch's membrane represent a diffusion barrier. With increasing age, RPE cells thicken and become heavily loaded with metabolic fatty end products like lipofuscin resulting from the lysosomal degradation pathways (Fig. 11.1b) [15, 16]. With the concomitant thickening of Bruch's membrane, water transport toward and nutrient transport from the choroid eventually become heavily compromised [17].

For a variety of retinal diseases, especially those which are thought to be associated with a degradation and reduced metabolism at the retina, it might be elegant to selectively eliminate specific regions of the monocellular RPE layer and thereby trigger its regeneration by wound healing. It is known that the RPE is able to rapidly close small wounds as found after mild photocoagulation. Such a cellular rejuvenation might lead to an improved metabolism, molecular transport and retinal functionality. However, the adjacent highly sensitive neural retina and choroid should not be affected by such a process.

The question arose if and how such a selective RPE treatment can be induced. In 1983, Anderson and Parrish published the idea of selective photothermolysis, based on the concept of an absorption contrast with stronger light absorption in the target area embedded in less absorbing surroundings [19]. Selective treatment is postulated when using repetitive laser pulses with peak temperatures allowing sufficient denaturation rates just at the absorbing sites. This concept is perfectly suited to the RPE cells, which are heavily loaded with strongly light-absorbing melanosomes (Fig. 11.1: 100–200 per cell, each about 1 μm in size). About 20–80% of light in the green spectral region is absorbed by the RPE [15]. Melanosome distribution and absorption vary inter- and intraindividually; absorption typically increases towards the periphery and decreases with age [20].

In order to selectively heat and damage only the RPE, without affecting the surroundings, heating should take place far below the RPE's thermal confinement time, defined as the time in which the maximum temperature rise decays to 36% (1/e). Assuming a 3 μm uniformly absorbing sheet (neglecting the individual melanosomes) and a much larger heating spot diameter, about 30 μs are estimated. Thus, heating times far below 30 μs should enable high temperatures at the RPE with strong temperature gradients towards the much less absorbing environment [21]. Such quick heating can be obtained with high-power laser pulses, preferentially at wavelengths in the green spectral range due to the high melanin absorbance. Unfortunately, μs heating times require very high irradiance, and thereby laser power, in order to reach temperatures sufficient for thermal damage. According to the Arrhenius theory, which describes thermal damage as a first-order rate process [3], a single 30 μs pulse already requires temperatures exceeding 100 °C when the commonly used Arrhenius constants for retinal denaturation are taken into account. However, as thermal damage is accumulative, applying multiple pulses with lower damage rates was assumed to be sufficient as well, provided that an appropriate cooling time between the pulses is met in order not to lose selectivity owing to background heating by thermal diffusion.
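The Arrhenius argument above can be made concrete. For a rectangular temperature pulse, the damage integral reduces to Ω = A·t·exp(−Ea/RT), with Ω = 1 conventionally taken as the denaturation threshold; the rate constants used below are assumed, commonly cited literature values for retinal tissue, so the numbers serve as an order-of-magnitude sketch only:

```python
import math

R_GAS = 8.314       # J/(mol K), universal gas constant
A_RETINA = 3.1e99   # 1/s, frequency factor (assumed literature value)
EA_RETINA = 6.28e5  # J/mol, activation energy (assumed literature value)

def arrhenius_damage(temp_kelvin, exposure_s, A=A_RETINA, Ea=EA_RETINA):
    """Damage integral Omega for a rectangular temperature pulse."""
    return A * exposure_s * math.exp(-Ea / (R_GAS * temp_kelvin))

def threshold_temperature(exposure_s, A=A_RETINA, Ea=EA_RETINA):
    """Temperature (K) at which Omega reaches 1 for a given exposure time."""
    return Ea / (R_GAS * math.log(A * exposure_s))

# Shorter exposures demand higher peak temperatures for the same damage,
# which is why single-microsecond pulses need very high irradiance.
t_short = threshold_temperature(30e-6)   # single 30 us pulse
t_long = threshold_temperature(100e-3)   # 100 ms conventional exposure
```

The exact threshold temperature depends strongly on the chosen rate constants, which vary considerably across the literature; only the qualitative trend (shorter pulse, higher required temperature) is robust.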

#### **11.2.1 Experimental Results**

First promising results on such selective photocoagulation of the RPE, obtained by means of a high-power chopped argon ion laser at 514 nm with a pulse duration of 5 μs and a repetition rate of 500 Hz on rabbits using a 110 μm spot diameter, were published by Roider et al. [21]. In order to achieve the required irradiance, and the increase in power necessary for larger spots and shorter pulses, a frequency-doubled, Q-switched Nd:YLF laser with intracavity second harmonic generation to a wavelength of 527 nm was developed, which allowed the pulse duration to be adjusted between 250 ns and 3 μs with almost constant peak power and pulse energies around 1 mJ [22]. This laser was used to investigate the damage thresholds on porcine RPE explants ex vivo by means of a vitality stain, depending on pulse energy, pulse duration, and number of pulses. Figure 11.2 shows that the effective dose for 50% probability (ED50) thresholds of cell damage increases almost linearly with pulse duration, from 120 mJ/cm2 at 250 ns up to 220 mJ/cm2 at 3 μs for single pulses. Increasing the number of pulses reduces the threshold radiant exposure per pulse by 50% (250 ns pulses) and 20% (1 and 3 μs pulses), respectively, with saturation observed at around 500 pulses [22], although the accumulated radiant exposure within the exposure region is increased.
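The ED50 values quoted above are typically obtained by fitting a sigmoid dose–response curve to binary cell-damage outcomes; a minimal logistic sketch (the slope parameter and all numbers are illustrative, not from the cited study):

```python
import math

def logistic_damage_probability(radiant_exposure, ed50, slope):
    """Probability of RPE cell damage at a given radiant exposure (mJ/cm2),
    modeled as a logistic dose-response curve centered at the ED50."""
    return 1.0 / (1.0 + math.exp(-slope * (radiant_exposure - ed50)))

# At the ED50 itself, the damage probability is 50% by definition.
p_at_threshold = logistic_damage_probability(120.0, 120.0, 0.1)
```

In practice, the ED50 and slope are estimated from the binary stain outcomes by maximum-likelihood (probit or logistic) fitting.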

As an additional concept, a continuous wave argon ion laser beam was used in a rapid scanning mode, also providing μs illumination times. Using a 19 μm laser spot diameter at a scanning speed of 11.7 m/s, in order to achieve a 1.6 μs irradiation time, a threshold power of 569 mW was found for ten repetitive scans, corresponding to a threshold radiant exposure of 297 mJ/cm2 [23]. The thresholds found here were slightly higher than with the pulsed application, but also showed saturation for many hundreds of scans (Fig. 11.2) [23]. The technique was validated on rabbits, showing the desired RPE effects [24]. Interestingly, the retinal spot scanning speed of Heidelberg Engineering's confocal scanning laser ophthalmoscopes (cSLO) is around 70 m/s, which corresponds to a single-spot irradiation time of about 270 ns. This irradiation time had already been proven with a pulsed laser in clinical trials [25], and thus a cSLO represents another promising irradiation modality for SRT.
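The single-spot irradiation times quoted above follow directly from spot diameter divided by scanning speed; a quick check using the values from the text (reusing the 19 μm spot for the cSLO case is an assumption):

```python
def dwell_time_s(spot_diameter_m, scan_speed_m_per_s):
    """Effective single-spot irradiation time of a scanned cw laser beam."""
    return spot_diameter_m / scan_speed_m_per_s

t_argon = dwell_time_s(19e-6, 11.7)  # scanned argon laser, ~1.6 us
t_cslo = dwell_time_s(19e-6, 70.0)   # cSLO scan speed, ~270 ns
```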

However, the threshold dependence on the pulse duration, and the saturation with higher numbers of pulses, contradicted the Arrhenius theory of thermal damage. Thus, other mechanisms of cell damage were considered. Lin et al. showed selective damage of artificially pigmented cells by microbubble formation (MBF) around phagocytosed absorbers induced by Q-switched laser pulses [26]. Microbubble occurrence always coincided with immediate cell death observed with a vitality stain. In order to

**Fig. 11.2** Threshold radiant exposure for RPE cell damage as a function of the number of pulses applied, for different irradiation modalities and parameters [22, 23]

understand MBF more closely, the melanosomes and the granular structure of the absorbers needed to be taken into account. The thermal confinement time for a 1 μm diameter melanosome can be estimated at about 450 ns [27]. Microbubbles nucleate heterogeneously at the surface of the melanosomes and then grow around them [27]. Investigations of the bubble dynamics revealed a growth of the bubbles proportional to the radiant exposure for ns pulse durations, but an oscillation with a limited maximal size of about 4 μm in a self-limiting process when using μs pulse durations [28]. Fast flash photography showed that the size of the microbubbles around single melanosomes scales proportionally with their lifetime, in accordance with the Rayleigh equation, which is also valid for cavitation bubbles. Nucleation temperatures have been experimentally determined to be around 136 °C for ns [29] and 157 °C for 1.8 μs pulse durations [27], respectively. These temperatures correspond to the nucleation temperature of water under a pressure of 3–4 bar, which also represents the surface tension of a microbubble that needs to be overcome for growth. Temperature and Arrhenius calculations using a mesh of melanosomes interestingly also showed that the experimentally found threshold dependence on pulse duration is only applicable at the surface of the melanosomes [30].
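The Rayleigh relation mentioned above links a cavitation bubble's collapse time to its maximum radius, t_c = 0.915·R_max·sqrt(ρ/Δp); a full growth–collapse oscillation lasts roughly twice that. A sketch for inferring bubble size from a measured microbubble lifetime (the density and driving-pressure values are illustrative assumptions):

```python
import math

def max_bubble_radius_m(lifetime_s, rho=998.0, delta_p=1.0e5):
    """Invert the Rayleigh collapse-time formula t_c = 0.915*R*sqrt(rho/dp),
    treating the measured microbubble lifetime as growth plus collapse (2*t_c).
    rho: liquid density (kg/m^3); delta_p: driving pressure difference (Pa)."""
    t_collapse = lifetime_s / 2.0
    return t_collapse / (0.915 * math.sqrt(rho / delta_p))

# With these parameters, a lifetime of ~370 ns corresponds to a maximum
# radius of roughly 2 um, i.e. a diameter near the ~4 um limit reported
# for us pulse durations.
r_max = max_bubble_radius_m(370e-9)
```

The linear lifetime-radius scaling is what allows fast flash photography and interferometric lifetime measurements to be compared directly.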

MBF at melanosome clusters within RPE cells has been visualized by interferometry and fast flash photography, supported by vitality stains (Fig. 11.3) [31]. The dynamics of these bubble clusters reveal nucleation on individual melanosomes and a later coalescence of these microbubbles into larger macrobubbles, especially at radiant exposures significantly above the threshold. Bubble wall speeds of typically 5 m/s were found in the expansion phase, reaching up to 30 m/s in the collapse phase [32]. In contrast to direct thermal expansion of tissue, the strong volume change due to bubble expansion is likely responsible for lifting the complete retina, as observed in OCT [33]. When multiple microbubbles nucleate within an RPE cell, its volume increases instantaneously and disrupts the cellular membranes [34]. Consequently, it is likely that this RPE damage mechanism can be attributed to thermomechanical disruption.

**Fig. 11.3** RPE explant pictures [31] taken before (**a**), 200 ns after (**b**), and 1 s after (**c**) irradiation within the area (green circles). The highly reflective points (white regions) in (**b**) represent light scattered from multiple microbubbles. (**d**) Shows the laser pulse (green) and the interferometric transient (red) over time, measured in the central circular area in (**b**) indicated with the white circle

Experiments on rabbits in vivo revealed that RPE disruption is ophthalmoscopically invisible with white light illumination during treatment [35]. However, it can be demarcated by fluorescence angiography: in case of tight junction disruption within the RPE, fluorescein can diffuse through the broken blood–retina barrier, and fluorescing spots are observed at these locations. Typical ED50 threshold radiant exposures for μs pulses (100 pulses with a single pulse duration of 1.7 μs) are determined by angiography to be 131 mJ/cm2 and are therefore comparable to those found on porcine RPE explants ex vivo labelled with a vitality stain. At radiant exposures about twice the ED50 RPE damage threshold, the effects become optically visible [12]. They appear like mild whitish photocoagulations, but can most likely be attributed to increased light scattering due to the mechanical dislocation and disruption of retinal tissue by large coalesced macrobubbles. With a further increase of the radiant exposure, these macrobubbles lead to retinal/choroidal disruption with bleeding, as typically observed after Q-switched laser accidents. For a safe and selective treatment, a therapeutic window with radiant exposures slightly above the MBF threshold and well below twice the MBF threshold should be targeted. Closer investigation of the cellular damage mechanism as a function of pulse duration at near-threshold irradiation shows that the predominant origin is thermomechanical for pulse durations up to 20 μs and thermal for pulse durations of 50 μs and larger [36, 37].

Histology and electron microscopy collected at different points in time after irradiation of rabbit eyes show disrupted RPE cells without damage to the surrounding tissue [38], confirming the selectivity of RPE damage. Depending on the site, the wound is completely repopulated with RPE cells in less than 1 week [39, 40], as shown in Fig. 11.4. As a first reaction, living RPE cells at the rim of the damaged zone spread and migrate into the ablation site and close it rapidly. Subsequently, cell mitosis and proliferation restore the original cell density (Fig. 11.5), which is considered the most important step towards regeneration of the RPE and thus the basic concept of the therapy. Histology further reveals that the photoreceptors in the adjacent layer remain functional. This is confirmed by the continued growth of their outer segments during wound healing, owing to the temporary absence of RPE phagocytotic activity. Their normal length is restored once wound healing is completed, which also demonstrates recovered RPE metabolic function. Multifocal electroretinograms of recovered tissue revealed no significant reduction in amplitude after SRT compared to standard photocoagulation, providing further proof of preserved retinal function [41, 42].

Cytokine release from the RPE also changes after irradiation; an investigation with ex vivo RPE showed a decrease of VEGF and an increase of PEDF secretion from RPE cells 3 days after irradiation [39]. Proinflammatory cytokines, including IL-1β and TNF-α, showed significantly lower levels after SRT than after LPC [43]. It has also been shown that matrix metalloproteinases (MMPs), particularly the active form of MMP-2, are significantly upregulated [39, 40, 44] and locally resolved as shown in Fig. 11.5 [39], which is assumed to play a positive role in preventing matrix degradation of Bruch's membrane and eventually increasing transmembrane molecular exchange. Human donor samples of Bruch's membrane incubated with MMP-2 showed considerably improved hydraulic conductivity and transport capabilities [17, 40]. Interestingly, the upregulation of active MMP-2 and PEDF in the whole RPE explant is more apparent with 200 μm spots than with 100 μm spots when the total damage area is kept constant [39]. This suggests that the spot size of the RPE damage may also influence the subsequent cytokine release and thus the therapeutic effect. In conclusion, RPE cell regeneration together with an improved molecular diffusion rate through Bruch's membrane might contribute to the therapeutic effectiveness, as the complete choroidal–retinal complex is affected by SRT.

**Fig. 11.4** Porcine RPE explants irradiated with 140 mJ/cm2 per pulse, stained with the vitality marker Calcein AM showing vital cells with green fluorescence [39]. Immediately after treatment the square laser beam profile is visible (**a**). Four days after SRT, irregularly shaped cells with conspicuous nuclei covered the laser-induced RPE wound (**b**). Some of these enlarged cells showed two nuclei (arrow), which suggests mitosis

**Fig. 11.5** Immunofluorescence staining of RPE–choroid explants [39]. (**a**) Displays mitosis indicated by the Ki-67 marker (orange) days after SRT. Localized positive signals (arrows, orange) in cells surrounding the SRT area indicate RPE regeneration. Untreated RPE showed no signal for Ki-67. (**b**) MMP-2 staining reveals a localized expression of MMP-2 (purple) in the nuclei of cells (**b**, arrows) that migrate into the SRT lesion after treatment, while untreated RPE showed MMP-2 only sporadically at cell membranes

#### **11.2.2 Clinical Study Results**

SRT was first undertaken to prove the concept clinically and to investigate its potential for treating different retinal diseases, focusing on three retinal pathologies: diabetic macular edema (DME), central serous retinopathy (CSR), and drusen in age-related macular degeneration (AMD). A Q-modulated, frequency-doubled Nd:YLF laser operating at a wavelength of 527 nm was used with a pulse duration of 1.7 μs and a repetition rate of 100 or 500 Hz to apply 30 or 100 pulses, respectively, per treatment spot of 160 μm diameter on the retina [45, 46]. In order to find sufficient pulse energies for successful irradiation just above the MBF threshold, titrations were performed prior to treatment at the arcades in combination with fluorescence angiography to demarcate the broken blood–retina barrier (Fig. 11.6). As observed in rabbits, SRT spots become directly visible under white light illumination at the slit lamp immediately after irradiation when the angiographically determined threshold radiant exposure is exceeded by a factor of about 2 [47, 48].

**Fig. 11.6** Fundus picture (**a**) and fluorescein angiogram (**b**) of a CSR patient's fundus after SRT [47]. SRT lesions clearly show up in angiography. Test lesions with increasing energy for dosimetry were placed at the arcades; only the highest pulse energies also lead to a slight directly visible effect in the fundus photo (yellow arrows). The lowest angiographically visible radiant exposures were used for treatment in the central macula (green arrow). For comparison with standard photocoagulation, a fundus image after typical panretinal photocoagulation is shown in (**c**)

The lowest angiographically visible radiant exposures found in test expositions at the arcades were in the range of 350–500 mJ/cm2 per pulse, about 2.7–3.8 times higher than the rabbit's angiographic ED50. For treatment, typically 650 mJ/cm2 was used to compensate for the variation in pigmentation, which is typically lower in the central macula. It quickly turned out that a repetition rate of 500 Hz is too high for a spot diameter of 160 μm to allow sufficient cooling between the pulses. Since then, 100 Hz has been the standard repetition rate.

The much higher radiant exposures required compared to the very young rabbits and pigs are very likely related to the stronger light scattering within the patient's eye before the radiation reaches the retina. Boettner and Wolter measured a strongly reduced direct light transmission through human eyes with age [49]: only about 40% direct transmittance in the green spectral range was measured for a 53-year-old eye. This value decreases further with age owing to increased light scattering, predominantly in the lens due to age-related cataract, which well explains the higher thresholds for clinical SRT. These findings also coincide with temperature measurements during retinal photocoagulation: only about 20–30% of the green light entering the eye reached the target site at the retina and led to the temperature rise measured by optoacoustics in patients [50].

With respect to the therapeutic outcome in the follow-up period, hard exudates disappeared in 6 out of 9 patients, retinal edema resolved in 6 out of 12 diabetic patients, drusen were reduced in 7 out of 10 AMD patients, and leakage in CSR disappeared in 3 out of 4 cases [46]. Comparisons of SRT and photocoagulation lesions directly after treatment using optical coherence tomography (OCT) revealed no OCT-visible hyperreflectivity at the SRT sites, in contrast to the strongly scattering photocoagulation spots [51]. In both cases, the RPE appeared thinner in the follow-up. After 4 weeks, RPE thickening, likely due to overproliferation, was observed after SRT. One year after treatment, photocoagulation sites were characterized by RPE and neurosensory tissue atrophy; in contrast, SRT lesions showed unaffected neurosensory structures and an intact RPE layer.

Encouraged by these promising pilot study results, an international SRT multicenter trial was conducted to evaluate the therapeutic effect of SRT in 60 patients with diabetic maculopathy [52]. In this study, 30 pulses with a single pulse duration of 1.7 μs at 100 Hz were applied per spot with a top-hat beam profile of 210 μm on the retina. A Nd:YLF laser with intracavity overcoupled second harmonic generation was developed to generate the μs laser pulses at 527 nm [53]. Typical pulse energies for DME treatments were 200–325 μJ, corresponding to pulse powers between 117 and 191 W and calculated radiant exposures between 577 and 938 mJ/cm2, respectively [52]. In 95% of the 60 patients, visual acuity improved or remained stable at the 6-month follow-up. The angiographic results with respect to leakage areas, and the OCT results on edema size and thickness, however, did not fully correlate with visual acuity. Further studies on DME [54], on macular edema after branch or central retinal vein occlusion [55], and on geographic atrophy [56] were conducted.

Most impressive are the study results for the treatment of central serous retinopathy (CSR). A study on 27 patients with active CSR showed resolved subretinal fluid in 85.2% of the patients after 4 weeks and in 100% after 3 months, with no visible leakage. Mean visual acuity improved from 20/40 at baseline to 20/20 at 3 months [57]. A study randomizing patients to immediate treatment or a 3-month waiting period also showed significant benefits and fast fluid resolution for the patients treated immediately [58]. A study conducted in Japan supported these findings [59]. With respect to the safety of the treatment, microperimetry and multifocal ERG did not show any functional retinal defects of the kind typically found after standard photocoagulation [59]. Another pilot study, conducted in Korea (527 nm, 100 Hz, 30 pulses per spot, 210 μm diameter on the retina) on chronic (>3 months) CSR patients, revealed much lower pulse energies between 65 and 90 μJ needed for selective RPE effects [48]. Angiographically visible effects were observed with as little as 70 μJ (200 mJ/cm2). Likely reasons for these lower radiant exposures are the relatively clear media of the younger CSR patients, the higher pigmentation of Asian eyes, and the very strong intensity modulation across the lateral beam profile of the laser used. Only 3.8 laser spots on average per patient were applied around the leakage point, leading to a complete resolution of subretinal fluid in 75% of the patients after 9 months. Only very few spots around the feeder vessel need to be applied in the central macula, in contrast to a study using a non-damaging laser therapy with 548 laser spots on average per patient [60]. Thus, for CSR, SRT seems to be an ideal treatment modality.

Another clinical study investigated shorter pulse durations between 200 and 300 ns for SRT, which also produced selective RPE damage, at even lower pulse energies than 1.7 μs owing to the reduced heat diffusion during irradiation [25, 61]. However, since the individual nucleation times of the multiple microbubbles converge, the dynamics become more explosive and the risk of unintended bleeding rises, as clinically observed following the impact of single 3 ns laser pulses in another technique for retinal rejuvenation [62].

The primary disadvantage during SRT treatment is the lack of direct optical feedback owing to the low or absent visibility of SRT lesions. On the one hand, this strongly limits the ophthalmologist's ability to interactively place the lesions in the desired areas: neither reference grids for DME nor well-localized circles or patterns for CSR treatment can be placed without visual feedback. On the other hand, with respect to proper energy dosage, studies showed a strong inter- and intraindividual variation of the pulse energies necessary for comparable RPE effects [63, 64]. Even with titration at the arcades, angiography after treatment is required to verify RPE defects because of the variability in absorption. In case of a negative angiography outcome, retreatment with higher energies is required, a tedious procedure with increased risk for the patient owing to multiple angiographies. The state of the art for improving and controlling dosage is discussed in the next section.

#### **11.2.3 Dosimetry and Dosing Control**

Microbubble formation (MBF) at the intracellular RPE melanosomes and its dynamics have been identified as the origin eventually leading to thermomechanical photodisruption of the cells [22, 34]. As long as the irradiation stays within the therapeutic window, with radiant exposures slightly above the MBF threshold that cause only small, separated microbubbles, selective cell damage without any adverse effects on the photoreceptors and the choroid has been proven in multiple clinical trials [45, 46, 48, 51, 52, 54, 55, 57–59, 61, 65–67]. MBF and the blood–retina barrier breakdown caused by RPE disintegration, shown by fluorescence angiography, coincide very closely; hence the occurrence of microbubbles can be taken as proof of successful RPE damage [63, 64]. Fortunately, microbubble formation and its dynamics can be measured indirectly and non-invasively in vivo during irradiation by several methods, described in subsections (a)–(d) below.


With appropriately fast acquisition and rapid data evaluation, these techniques can provide the attending physician with immediate feedback on the correct dosage. They can further be extended to fully automated feedback-controlled dosing that unburdens the clinician from any manual dosing. A useful strategy is to ramp up the train of pulses per spot and measure the occurrence of MBF after each individual pulse. As soon as MBF appears, the irradiation can be ceased automatically, either immediately or a few pulses later if desired [68]. Figure 11.7a shows an example of such a ramp with a stepwise increase over 15 pulses from 70 to 140 μJ. The technique has been validated in preclinical trials with automatic feedback control [69, 70].
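The ramp strategy above can be sketched in a few lines. The `apply_pulse` and `detect_mbf` callables are hypothetical stand-ins for the laser driver and the MBF sensor; the 70–140 μJ, 15-pulse ramp follows the example of Fig. 11.7a.

```python
import numpy as np

def ramp_energies(e_start=70e-6, e_stop=140e-6, n_pulses=15):
    """Stepwise linear energy ramp in joules (70-140 uJ over 15 pulses)."""
    return np.linspace(e_start, e_stop, n_pulses)

def treat_spot(apply_pulse, detect_mbf, extra_pulses=0):
    """Fire the ramp pulse by pulse; cease irradiation once microbubble
    formation (MBF) is detected, optionally a few pulses later.

    apply_pulse(energy) fires one pulse and returns the measured transient;
    detect_mbf(transient) returns True once MBF is detected. Both are
    hypothetical interfaces standing in for the hardware."""
    fired = []
    remaining = None            # pulses still allowed after first MBF
    for energy in ramp_energies():
        transient = apply_pulse(energy)
        fired.append(energy)
        if remaining is None and detect_mbf(transient):
            remaining = extra_pulses
        elif remaining is not None:
            remaining -= 1
        if remaining is not None and remaining <= 0:
            break               # stay within the therapeutic window
    return fired
```

With `extra_pulses=0` the emission stops on the very pulse that first shows MBF; a small positive value reproduces the "few pulses later" variant mentioned above.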

With respect to clinical applications within the approval phase of such a technique, the physician is currently guided by the system as follows: the clinician chooses the maximal energy of the last pulse, and the system applies a stepwise ramp beginning with 50% of the maximal pulse energy. MBF is measured after each single pulse. After all 15 pulses have been applied, the system proposes to keep the energy for the next spot if MBF was measured with pulses in the middle of the ramp. In case of MBF at the beginning or end of the ramp, the physician is asked to decrease or increase the maximal energy, respectively [54, 71].

In order to detect MBF, optical and acoustical transients can be measured and evaluated with appropriate techniques for each individual pulse in real time, as explained in the following subsections. As an example, Fig. 11.7 shows the applied energy ramp and the corresponding transients recorded during a clinical study at the University eye clinics in Kiel (UKSH) and Hannover (MHH) [64].

**Fig. 11.7** Pulse energy ramp for an SRT spot (**a**) with acoustical (**b**) and optical transients (**c**), color-coded, for the same laser spot; examples from a clinical trial using both techniques simultaneously [64]. Blue colours represent sub-MBF energies. Pulse #8 (light green) is the first to show clear modulations on both the acoustical and optical transients compared to the previous pulses, indicating that the MBF threshold has been passed. Orange and red lines denote pulses with strong modulations associated with large and likely coalesced bubbles. In case of automated feedback, laser emission should be ceased upon reaching the green or orange pulses to stay safely within the therapeutic window and prevent large thermomechanical disruption, which can lead to collateral damage, including bleeding, as has been observed after therapeutic 3 ns laser exposure [62]

#### (a) Optoacoustics

Light absorbed by tissue is predominantly converted to heat, which leads to thermoelastic expansion and subsequent contraction, accompanied by the emission of a bipolar pressure wave [72]. Its frequency depends on the thickness of the absorber and the spot size, while its amplitude is proportional to the pulse energy and the strongly temperature-dependent Grüneisen parameter. Therefore, optoacoustics can be used to calculate the tissue's temperature increase during SRT [73] and also during standard photocoagulation [50]. Typically, the frequencies after heating tissue with ns to μs pulses are in the MHz and therefore ultrasonic range. The acoustic waves travel back through the eye and can be measured at the cornea. As the pressure amplitudes are only in the range of a hundred microbars, a very sensitive sensor is required. Typically, an annular piezoelectric ultrasonic transducer is embedded in the contact lens that is used anyway for laser treatments at a laser slit lamp [73]. Figure 11.7b (blue curves) shows such transients for SRT at low radiant exposures [64]. Since the pressure is proportional to the pulse energy, all transients are almost identical when normalized to the energy (Fig. 11.7b). If the temperature at the surface of the melanosomes exceeds the vaporization temperature, microbubbles form and expand [74, 75] and subsequently emit further individual pressure waves. As the nucleation onset times at the different melanosomes differ slightly, and also differ among different pulses owing to the displacement of the melanosomes by the bubbles, phase and amplitude fluctuations of the transients are observed, as shown in Fig. 11.7b (green and red curves). With an algorithm developed to take into account these temporal deviations among the acoustic transients, Schüle et al. introduced an optoacoustic value (OA value) [63]. With an appropriate threshold OA-value, RPE damage as assessed by fluorescein angiography can be predicted with high specificity [76, 77].
Other algorithms based on further signal features can also be processed, providing greater reliability for the feedback when run in parallel [77]. An appropriate threshold OA-value can be used in real time to interrupt the laser ramping and emission once MBF is exceeded. The beauty of this approach is that the excitation is induced by light while the response is acoustic, and the pressure waves are hardly affected by the ocular media. From the perspective of bubble detection, the correct focus of the light and the beam profile itself play only very minor roles. The technique has been proven in clinical studies by displaying the OA-value to the ophthalmologist on a monitor [52, 63] or reporting it verbally [59] in order to give immediate feedback on whether the irradiation energy was sufficient. In recent clinical studies, it was also used for dosing guidance [54, 71].
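As an illustration of the idea behind such a value, the following sketch scores each energy-normalized transient by its RMS deviation from a sub-threshold reference transient: identical shapes give a value near zero, while the phase and amplitude fluctuations after MBF drive the value up. This is a minimal sketch of the principle; the published OA-value algorithm [63] and the clinically used threshold differ in detail.

```python
import numpy as np

def oa_value(transient, reference, energy, ref_energy):
    """Illustrative optoacoustic value: RMS deviation of the
    energy-normalized transient from a sub-threshold reference,
    relative to the RMS of the reference itself."""
    t = np.asarray(transient, float) / energy
    r = np.asarray(reference, float) / ref_energy
    return float(np.sqrt(np.mean((t - r) ** 2)) / np.sqrt(np.mean(r ** 2)))

def mbf_detected(transient, reference, energy, ref_energy, threshold=0.5):
    """Flag MBF when the deviation exceeds an (assumed) threshold."""
    return oa_value(transient, reference, energy, ref_energy) > threshold
```

Below threshold, all transients scale linearly with pulse energy, so the normalized deviation stays small; the extra pressure waves emitted by the bubbles break this proportionality and raise the value.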

#### (b) Light reflection

During laser irradiation of the retina, some of the treatment light is scattered back from the RPE and can be measured in front of the eye. If only tissue heating takes place, the reflective properties stay almost constant and the pulse shape of the back-reflected light is identical to that of the incoming pulse (Fig. 11.7c). After MBF, however, the refractive index step at the bubble's surface, between the tissue outside, essentially consisting of liquid water (n ≈ 1.4), and the water vapor inside the bubble (n ≈ 1), increases the back-reflected intensity by about 2–3 orders of magnitude. Furthermore, the light is modulated by the dynamics of the bubble expansion and collapse. Figure 11.7c (green and orange curves) shows such back-reflected pulse shapes, which strongly deviate from those below the bubble-formation threshold. This method was developed by Seifert et al. [70], who calculate a reflectometry value (RE-value) from the signals. Data from clinical trials were recently analyzed and mathematically optimized in order to obtain a threshold RE-value with the highest sensitivity and specificity [64] for an automatic real-time stop of the pulse energy ramp. The technique is currently under investigation in clinical trials [54, 64, 71, 77]. The strength of this method is that the high-power laser light used for treatment can be evaluated directly; on the hardware side, nothing but a fast photodiode is required to record the strong backscattered light. However, in contrast to optoacoustics, the reflected light intensity is influenced by the ocular media, such as the lens and cornea, and can vary strongly, as the backscatter of a collection of small spherical bubbles ("bubble foam") is highly dependent on the particular microbubble distribution.
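A minimal reflectometry score in the same spirit might compare the back-reflected pulse shape to the incident one after fitting a single constant reflectivity; any residual then indicates a time-varying reflectance, i.e. bubble dynamics. The published RE-value [64, 70] is defined differently, so this is only an illustration of the principle.

```python
import numpy as np

def re_value(reflected, incident):
    """Illustrative reflectometry value: relative residual between the
    back-reflected pulse and the incident pulse shape after least-squares
    fitting of a constant reflectivity factor."""
    r = np.asarray(reflected, float)
    i = np.asarray(incident, float)
    scale = np.dot(r, i) / np.dot(i, i)      # best constant-reflectivity fit
    residual = r - scale * i
    return float(np.linalg.norm(residual) / np.linalg.norm(scale * i))
```

Pure heating returns a value near zero (the reflected pulse is just a scaled copy of the incident one), while a reflectivity jump or modulation during the pulse yields a clearly nonzero residual.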

#### (c) Small spectral bandwidth interferometry

The occurrence and dynamics of microbubbles can also be probed well by interferometry using small-bandwidth laser light with a large coherence length, as typically provided by single-longitudinal-mode lasers. With a fast photodiode collecting the reflected light from the eye in the object arm of the interferometer, microbubble dynamics can be recorded at very high temporal resolution, providing more information than just the MBF threshold detected by OA- and RE-values. With a single-frequency laser diode, the heating of the tissue, in the form of a slight intensity shift due to the thermal expansion of the RPE, as well as the onset of microbubbles (modulations in Fig. 11.3), their coalescence and collapse, and finally the slow thermal contraction after irradiation, have been monitored [31, 32]. Thus, small-bandwidth interferometry can likely be used for fine dosing control in SRT [78], but has not yet been developed far enough for approved clinical trials.
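The sensitivity of this approach follows from the fringe relation between axial surface displacement and detected intensity: every half-wavelength of optical path change (i.e. every half-wavelength of round-trip displacement) sweeps through one full fringe. A minimal sketch of this relation, with an assumed probe wavelength of 773 nm (not necessarily that of [31, 32]):

```python
import numpy as np

def interferometric_intensity(displacement_nm, wavelength_nm=773.0):
    """Normalized detector intensity of a two-beam interferometer as the
    reflecting RPE surface moves axially by displacement_nm. The round
    trip doubles the path change, hence the factor 4*pi."""
    phase = 4 * np.pi * np.asarray(displacement_nm) / wavelength_nm
    return 0.5 * (1 + np.cos(phase))
```

A bubble oscillation of only a few hundred nanometres thus runs through several full fringes, producing the strong intensity modulations seen in Fig. 11.3d, whereas slow thermal expansion shifts the intensity only gradually.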

#### (d) Broad spectral bandwidth—OCT

Low coherence or white light interferometry (LCI) responds to reflectivity changes similarly to the other optical approaches. The broad bandwidth and the coherence between the different wavelengths, however, can also be utilized to convey spatial information about the reflector along the light propagation direction. Optical coherence tomography combines this technology with a lateral scan and enables reflectivity images in the form of axio-lateral cross-sections or volumes. The feasibility of performing OCT during retinal therapy exposure to observe signal variations correlated with the creation of RPE lesions was already investigated when early frequency-domain OCT systems became widely available [79]. However, at that time the temporal resolution was only sufficient to monitor the thermal expansion of tissue. For real-time diagnostics of laser–tissue interaction, two time-frames requiring quite different acquisition speeds have to be distinguished: the thermal expansion regime, governed by thermal diffusivities of up to 0.16 mm2/s for biological tissue [80] and leading to effective speeds of ~80 mm/s, and the regime where acoustic waves at ~1500 m/s dominate the energy transfer to the surrounding tissue [81]. Several groups have investigated structural OCT as an imaging modality to compare retinal layers before and after treatment [51, 82], or functional Doppler OCT for thermal expansion measurements [79, 83]. Direct imaging of thermal expansion during the sufficiently slow heat diffusion process is straightforward with current OCT systems at moderate axial resolutions of ~10 μm and ~100 kHz depth scan rates, even for lateral cross-sections of OCT tomograms.
Although modern OCT systems can scan remarkably faster (>1 MHz depth scan rate, i.e. <1 μs exposure time per depth scan), at the cost of the temporal resolution of the lateral scan, the four orders of magnitude faster shockwaves induced by a short treatment pulse on the sub-10 μs scale cannot be directly resolved in a 2D image. This becomes even more restrictive in the typically slower ultrahigh-resolution systems with a spectral bandwidth of more than 100 nm, which supports an *in vivo* axial resolution of about 3 μm or below, as required for clear visualization of cellular structures in the RPE layer and their differentiation from the surrounding tissue [84].
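The gap between the two time-frames can be made concrete with a rough back-of-the-envelope calculation. The 2 μm length scale below is an assumption (on the order of a melanosome) chosen so that the effective diffusion speed reproduces the ~80 mm/s figure quoted above:

```python
# Rough comparison of the two time-frames named in the text: heat diffusion
# vs acoustic energy transfer. L = 2 um is an assumed characteristic length.
D = 0.16e-6        # thermal diffusivity of tissue, m^2/s (0.16 mm^2/s)
c = 1500.0         # speed of sound in tissue, m/s
L = 2e-6           # assumed characteristic length scale, m

v_thermal = D / L  # effective "speed" of heat diffusion over L
print(f"thermal:  {v_thermal * 1e3:.0f} mm/s")   # ~80 mm/s
print(f"acoustic: {c:.0f} m/s")
print(f"ratio:    {c / v_thermal:.0f}x")         # ~2e4, i.e. ~4 orders
```

The ratio of roughly 2 × 10⁴ is the "four orders of magnitude" that separates the thermal-expansion regime, comfortably imaged at ~100 kHz depth scan rates, from the shockwave regime, which is not.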

During a burst of successive pulses, the optical reflectance of the irradiation laser increases in signal strength. Feedback from the reflectance of a forming microbubble is a simple optical technique for thresholding the exposure [41], as already pointed out above. LCI, or single-position, non-scanning OCT, utilizes the full temporal bandwidth for real-time SRT quantification of shockwave formation and can monitor effects caused by shockwaves even at kHz scan rates. In OCT systems it is thus an elegant method to reuse an existing detection channel, even though it sacrifices the lateral scan directions. Such time-resolved OCT had been utilized to detect slower effects in even less aggressive retina–light interaction, in so-called optophysiology, before it was implemented for SRT [33, 85]. Large distortions induced by laser pulses can be detected indirectly as a temporal change of intensity in OCT depth scans (A-scans, amplitude mode), which corresponds to the local reflectivity of the tissue. An M-scan (motion mode) visualization, a time-resolved sequence of A-scans, facilitates the recognition of differences (Fig. 11.8b). To extract changes in the structure, time-frequency analysis (TFA) of the signal can help to detect subtle changes in the M-scan. Shorter spikes covering only part of the frequency spectrum indicate slower temporal fluctuations within the M-scan (red vertical spikes in Fig. 11.8c at the start and end of the laser irradiation, indicated by arrows). Large interruptions become apparent as stripes across all frequencies. In the M-scan zoom-in (Fig. 11.8a), these changes are obvious blackouts with complete signal loss.
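A minimal sketch of such an analysis, assuming the intensity trace at the RPE depth has already been extracted from the M-scan; the window sizes and the 20 dB blackout criterion are illustrative choices, not the parameters of [86]:

```python
import numpy as np
from scipy.signal import spectrogram

def mscan_tfa(rpe_intensity, ascan_rate):
    """Time-frequency analysis of the OCT intensity at the RPE depth.
    rpe_intensity: 1-D intensity trace (one value per A-scan);
    ascan_rate: depth-scan rate in Hz. Returns frequencies, times and
    the log-power spectrogram, in which broadband stripes correspond to
    the large interruptions and partial spikes to slower fluctuations."""
    f, t, Sxx = spectrogram(np.asarray(rpe_intensity, float),
                            fs=ascan_rate, nperseg=64, noverlap=48)
    return f, t, 10 * np.log10(Sxx + 1e-12)

def washout_mask(rpe_intensity, drop_db=20.0):
    """Flag A-scans whose RPE signal drops far below the median level,
    i.e. the complete blackouts associated with microbubble formation."""
    i_db = 10 * np.log10(np.asarray(rpe_intensity, float) + 1e-12)
    return i_db < np.median(i_db) - drop_db
```

The mask gives a simple per-A-scan blackout detector, while the spectrogram separates broadband interruptions from the band-limited thermal fluctuations described in the text.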

#### **11.3 OCT for SRT Dosimetry**

#### **11.3.1 Hypothesis of Fringe Washouts in M-Scan OCT**

The origin of the above-mentioned complete OCT signal loss can be explained by coherent signal washout. This coherent decorrelation or "fringe washout" results from rapid fluctuations of the spectral phase that are faster than the integration time of the detector, such that the time-averaged signal becomes almost zero. The effect is well known in OCT images of larger blood vessels, especially those oriented parallel to the measurement beam: their lumen appears dark because the fast turbulent flow of moving blood cells generates fast phase modulations [87]. This effect differs from intensity decorrelation, where the variation between successive OCT scans is numerically calculated to contrast motion; in coherent decorrelation, the phase of the OCT signal is already destroyed prior to digitization. In coherent or phase decorrelation, the tissue scatterers are moved by a fast-oscillating sound field at amplitudes in the range of the axial resolution, so the spectral phase changes on timescales shorter than the duration of the irradiation pulses.

**Fig. 11.8** OCT M-scans of an ophthalmoscopically invisible SRT treatment lesion on *ex vivo* porcine eyes at high pulse energy and radiant exposure, well above the threshold for microbubble formation (180 μJ, 570 mJ/cm2) [86]. (**a**) Time-frequency analysis (TFA) of the speckle pattern at the RPE (blue-white dashed box in **b**) as well as an extract of 56 ms during laser application (**c**, yellow dotted box) clearly visualize changes caused by SRT application in both M-scan and TFA data (red arrowheads indicate pulse positions)

A typical OCT system operating at 20–100 kHz acquisition speed cannot resolve motion above 100 mm/s, or the equivalent speckle fluctuations, and will therefore start to experience signal loss when the cohesion of neighbouring scatterers within the OCT resolution volume is lost, which corresponds well to microbubble sizes of several micrometres. This distinct attenuation can be utilized as a sensitive indicator for strong axial motion associated with laser-induced microbubble formation within the monitored volume. Closer investigation of the M-scans also reveals smaller signal fluctuations that correspond to the smaller spikes in the TF analysis (Fig. 11.9d). In the M-scan these distortions exhibit a logarithmic decay that fits the thermal dissipation of heat expected from previous simulations [22].

This ability to discriminate purely heat-related changes from microbubble-related ones is a valuable advantage over simple reflectometry. Reflectometry utilizes a swelling signal that is supposed to correlate with the vapour-bubble size [89]. In OCT, however, no growth of signal intensity is found throughout the M-scan within a burst sequence, which indicates that decorrelation superposes the increase in reflectivity and can act as a sensitive indicator for bubble build-up. The distortions introduced are not completely independent of the bubble size: there seems to be a slight upward drift within and between successive scans at the signal edge of the inner limiting membrane (ILM) that might indicate a vertical shift due to bubble formation within a burst sequence, although this could also be attributed to thermal expansion and is hard to interpret owing to the speckle. In contrast, the length of the blackouts correlates well with the radiant exposure/fluence, and stronger pulses also cause a stronger structural change as well as an elongated relaxation period (Fig. 11.9).

**Fig. 11.9** (**a**) M-scans of *ex vivo* treatments at multiple locations, each with constant pulse energy, at 1.7 μs pulse length. The pulse energy increases from M-scan number 1 to 5, from 40 to 200 μJ (fluence: 200–1000 mJ/cm2). In the lowest-energy M-scan no apparent signal washout is visible, while it increases with irradiance. (**b**) Top view onto the RPE layer under bright-field illumination. Strong lesions (5), with complete annihilation of the RPE, relate to strong signal washouts in the corresponding M-scan. Weak lesions (2) conversely relate to weak M-scan signal washouts. Spots without visible lesions in the RPE layer (1) correlate with M-scans without any signal washout. Magnifications of the signal washouts (vertical black lines) in M-scans induced by (**c**) a single laser pulse with 54 μJ and (**d**) a single laser pulse with 180 μJ indicate a difference in the length of the signal loss and, furthermore, thermal distortions/fluctuations of the speckle for a longer time (~4 ms), especially in the retinal center above the RPE/CC complex. Adapted from [88]

#### **11.3.2 First Pre-clinical and Clinical Studies**

Pre-clinical studies showed that the signal changes in OCT M-scans during SRT allow real-time prediction of retinal lesions [33, 86, 88, 90, 91], as also found in clinical experiments. Figure 11.10 demonstrates a sub-visual (no visible coagulation spot in the fundus image, e) *in vivo* treatment with a clear response in the OCT scan, in addition to a brightening in the fundus fluorescence angiography (FFA) scan, after application of a 30-pulse burst sequence at ~160 mJ/cm2 fluence per pulse, applied at 100 Hz repetition rate and 250 ns pulse length. This is consistent with the exposure levels for microbubble formation from previous studies, where the calculated fluence for MBF is expected between 140 and 240 mJ/cm2. The calculated cumulative heating of below 2 °C contrasts with data from quasi-cw exposure, where the threshold for visible lesions was found at a ten times larger local temperature increase [79]. The absence of such a significant temperature increase opens a large laser process/treatment window before neighbouring tissue is affected.

#### **11.3.3 Future Developments Towards Reliably Detecting the Microbubble Threshold with OCT**

Several strategies for limiting the exposure beneath the threshold for retinal heat damage have been discussed and exploited. Simply keeping the average power well beneath the average levels for coagulation misses the therapeutic window that spans between pure RPE and extended retinal damage due to the strong local absorption variations of about an order of magnitude. With typical average excursions of ~0.5° during microsaccades that mostly stay smaller than 1°

**Fig. 11.10** OCT M-scan of successful in vivo SRT treatment (**a**) with axial position tracking (**b**). (**c**) Represents the inset in (**b**, dotted box) with white arrows indicating SRT pulse application with 80 μJ (160 mJ/cm2 at 250 ns).

(**d**) and (**e**) show the corresponding FFA and the fundus image. Yellow arrows correspond to the position of the treatment spot shown in the OCT scans. Adapted from [86]

corresponding to shifts of ~288 μm on the retina during fixation, focusing treatment spots to ~200 μm is challenging. Therefore, one way to adapt to this situation is the repetitive application of short pulse bursts between microsaccades (~200 ms apart), shortening the exposure time frame below the instability of the human eye during fixation, as suggested by the reflectometry-controlled SRT model [89, 92].
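
The quoted ~288 μm shift per degree can be reproduced with a small-angle estimate; the posterior nodal distance of about 16.5 mm used here is a textbook reduced-eye value, not a figure measured in this study:

```latex
s \approx f_{\mathrm{eye}}\,\theta
  = 16.5\,\mathrm{mm}\times 1^{\circ}\times\frac{\pi}{180^{\circ}}
  \approx 288\,\mathrm{\mu m},
\qquad
s(0.5^{\circ}) \approx 144\,\mathrm{\mu m}.
```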

Under the presumption that the appearance of a microbubble already indicates irreversible damage to the RPE, and that OCT reliably relates detected shockwaves to microbubble appearance, another approach is more sensible: the pulse train is successively ramped up in fluence until an MBF signal is detected or the maximum permitted exposure at full absorption is reached (see also Fig. 11.7a).

To successfully utilize a ramp mode, adaptive laser pulse energy control together with a fast and reliable method for exposure interruption is required. Beyond the signal washout, which can be detected by simple signal thresholding, the high sensitivity of OCT carries more information hidden in the sub-structure of the M-scan, including thermally induced fluctuations and possible secondary changes due to microbubble formation. One approach is to use human interpretation of the signals to train a classification network on preselected features. The network successively learns how to interpret different signal features, such as artifacts that compare sub-blocks with and without laser exposure, speckle changes, or changes in the speckle variance or spectrum (similar to the TF analysis mentioned above), and associates their appearance with the outcome. Besides teaching with less clear *in vivo* data and direct human observation of the OCT signals together with TF analysis, such a network can also be trained with *ex vivo* data grounded in histological results, which objectively relate to selective RPE damage. As expected, features found within the first 100 ms after exposure deliver the highest yield, while later portions of the signal, between 100 and 300 ms after the exposure pulse, continuously lose their relation to the distortion. Such automated analysis boosts prediction success rates from an initial ~60% to beyond 90% when targeting 95% specificity [93].
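
As a minimal illustration of the thresholding idea, the sketch below flags a lesion when the post-pulse signal washout lasts long enough. The function names, thresholds, sampling interval and the synthetic trace are all illustrative assumptions, not the published algorithm:

```python
# Minimal sketch of washout-based lesion prediction from an OCT M-scan
# summary trace (one intensity value per A-scan over time).

def blackout_duration(trace, baseline, drop_ratio=0.5, dt_ms=0.1):
    """Return how long (ms) the signal stays below drop_ratio * baseline."""
    below = [v < drop_ratio * baseline for v in trace]
    return sum(below) * dt_ms

def predict_lesion(trace, baseline, min_blackout_ms=1.0):
    """Simple classifier: a sufficiently long washout predicts RPE damage."""
    return blackout_duration(trace, baseline) >= min_blackout_ms

# Synthetic example: stable signal, a 2 ms washout after the pulse, recovery.
baseline = 1.0
trace = [1.0] * 50 + [0.2] * 20 + [0.9] * 50   # 0.1 ms per sample
print(blackout_duration(trace, baseline))       # 2.0 (ms)
print(predict_lesion(trace, baseline))          # True
```

A practical system would replace the single hand-set threshold with the trained classifier on the richer feature set described above.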

Artificial convolutional neural networks can take this approach a step further by selecting their own internal features rather than predefined ones. This currently fashionable, industry-driven approach has the advantage of exploring and utilizing information that has not been considered by the human supervisor, but it also bears the risk of training on features that are not in a direct cause-effect relationship with the outcome, encoded in deeper hidden network nodes. A typical example is an artificial neural network that is trained to distinguish dogs from wolves, but in the end only learns to react to the snowy background most frequently found in wolf pictures. Similar risks exist in the SRT application, where patient characteristics, image background signals or system properties can influence the results unless very large numbers of verified examples are used. Training such a network on OCT signals, however, also brings novel insights that are significantly harder for a human observer to quantify. When investigating the significance of signal components over the time course for predicting the treatment outcome, it can be shown that window sizes of ~4 ms, which for the utilized system also contain information from 200 μs prior to the applied pulse, achieve the highest precision and recall, or in other words the lowest false positive and false negative rates [94]. An explanation for this behaviour is that 200 μs before exposure, relatively slow changes due to eye motion do not yet affect the baseline, while longer pre-pulse intervals already exhibit distortions.
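
A window layout of this kind (~4 ms total, including ~0.2 ms of pre-pulse baseline) can be sketched as follows; the sampling interval and helper names are assumptions for illustration, not parameters of the system in [94]:

```python
# Sketch of cutting analysis windows from an M-scan trace such that each
# window starts shortly before the pulse and spans a few milliseconds.

DT_MS = 0.1  # assumed time between successive A-scans in the M-scan

def analysis_window(mscan, pulse_index, pre_ms=0.2, total_ms=4.0):
    """Cut a window starting pre_ms before the pulse, total_ms long."""
    start = pulse_index - round(pre_ms / DT_MS)
    stop = start + round(total_ms / DT_MS)
    return mscan[max(start, 0):stop]

mscan = list(range(1000))            # stand-in for per-A-scan summary values
w = analysis_window(mscan, pulse_index=500)
print(len(w), w[0])                  # 40 samples, starting 2 samples early
```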

For integration into a clinical system, these artificial neural networks have the advantage of shifting the complexity into a hidden layer: they require enough high-quality data for training, but thereafter can be realized with hardware capable of performing even these sophisticated decisions in real time. Specialized hardware with low energy consumption, high efficacy and speed is currently becoming available. Nevertheless, a black-box system for application in laser therapy implies strict control mechanisms. Assuming 5–30 treatment pulses with increasing energy within the stable exposure window of 200 ms, reaction times of ~3–20 ms are the target for ramp interruption. Considering electronic acquisition and laser driver delays, such a decision has to be made within a millisecond, or it has to be performed one or several pulses later, when the pulse energy has already grown. Optimizing the ramp's start and end levels and its slope will be a question to be answered together with the choice of the definitive detection mechanism.
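
The timing budget can be checked with simple arithmetic from the numbers stated above (200 ms stable window, 5–30 ramp steps); this is illustrative back-of-the-envelope arithmetic, not a system specification:

```python
# Rough timing budget for ramp interruption: the inter-pulse interval
# bounds how late a stop decision can arrive before the next, stronger
# pulse is already delivered.

window_ms = 200.0
for steps in (5, 30):
    interval_ms = window_ms / steps
    print(f"{steps} steps -> {interval_ms:.1f} ms between pulses")
# 5 steps  -> 40.0 ms between pulses
# 30 steps -> 6.7 ms between pulses
```

With acquisition and laser-driver latencies subtracted from these intervals, the ~3–20 ms reaction-time target quoted in the text is plausible.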

The following section returns from this outlook to the current technical efforts and improvements. It describes in detail the realization of a complete prototype setup combining OCT-based dosing control, therapy planning and documentation.

#### **11.4 SRT Module Integration into the OCT Platform**

Early on, the Heidelberg SPECTRALIS platform, originally built on a confocal scanning ophthalmoscope, already hosted a robust spectrometer-based frequency-domain OCT system and was capable of tracking lateral eye motion, circumventing the above-mentioned issues by a different approach: here, the eye tracker allows for sufficiently precise and fast online tracking of the scanner's lateral position such that imaging of OCT cross-sections becomes re-entrant. This means that scans can be interrupted and restarted at any point during the scan, and that imaging sessions can be repeated in long-term studies as "follow-ups", since the tracking algorithm aligns the optics with the long-term stable retinal vasculature, even with poor-quality ocular media. Furthermore, the modular design of this commercial multimodal platform provides access to other wavelength ranges, which can equally benefit from the eye tracker, as well as the integration of other optical measurement or treatment devices.

In a first collaboration with Heidelberg Engineering, the optoLab of the Bern University of Applied Sciences in Biel modified the mechanical setup of the SPECTRALIS platform and successfully integrated a second, spectrum-scanning OCT light source at 1050 nm into the system. With this setup, named SPECTRALIS Hydra, simultaneous acquisition of OCT images in both wavelength ranges became possible. First clinical studies in highly myopic children have been performed in Hong Kong to examine early-onset myopia [95].

Lessons learned from this implementation, including the mechanical setup and enhancements to the electronic communication, were the basis for integrating a retinal treatment laser in place of the second OCT light source. This new experimental setup, named SPECTRALIS Centaurus, offers a fast, smart and minimally invasive clinical tool for performing SRT.

The SPECTRALIS Centaurus system integrates hardware, firmware and software to deliver OCT-controlled, automatically tissue-adapted, power- and exposure-limited pulses with the high lateral and axial accuracy provided by the confocal eye tracker [96]. The 532 nm treatment laser is coupled collinearly into the light path of the 870 nm broadband OCT spectrometer via a dichroic beam splitter after adjustment of the spot diameter. Furthermore, the beam from the cSLO, centred at 820 nm, overlaps collinearly on the return path from the eye. To enable other imaging modalities such as 486 nm fluorescein angiography and 786 nm ICG (indocyanine green) angiography, the corresponding lasers are also integrated (see Fig. 11.11).

The SPECTRALIS Centaurus system features a novel compact μs laser system that builds on the experience with its predecessors. The experimental MERILAS SRT laser (Meridian AG) operates at the upper pulse length limit for SRT in comparison to its predecessors (Table 11.1). Forgoing a Q-switched laser that can accumulate energy limits the available fluence at shorter exposure times, but considerably eases implementation. With a spot diameter of 120 μm, which can be reduced thanks to the ocular stabilization, the heat losses are slightly stronger than with 200 μm spots, but the radiant exposure is considerably increased. With a maximum radiant exposure just above the upper microbubble threshold, the linear growth of energy with pulse length outpaces the increase of the threshold and achieves about 150% at 4 μs correspond-

**Fig. 11.11** Configuration of the SPECTRALIS Centaurus system [96]. The 532 nm treatment laser is integrated into the specially modified optics of the adapted

OCT system (left portion in front of the galvanometric scanner pair) and allows interaction with the multiwavelength laser tracking and exposure system


**Table 11.1** Overview of parameters used for different SRT investigation systems

ing to ~350% of the minimum threshold fluence measured in melanosome heating experiments [22]. It remains to be seen whether, in clinical practice with patients with increased ocular opacity, e.g. due to cataract, these levels are sufficient for all treatment cases or whether higher laser pulse energy is required. In any case, spot stabilization through real-time eye tracking allows for even smaller spot sizes at the retina, with the side effect of more precisely mapped treatment areas. Furthermore, the system provides an easy way to perform standard photocoagulation treatment with the same system by selecting longer pulses.
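
The gain in radiant exposure from the smaller spot follows directly from the spot area; at equal pulse energy $E$ and spot diameter $d$:

```latex
H=\frac{E}{\pi d^{2}/4},
\qquad
\frac{H_{120\,\mathrm{\mu m}}}{H_{200\,\mathrm{\mu m}}}
=\left(\frac{200}{120}\right)^{2}\approx 2.8 .
```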

Electrical signalling and firmware are already integrated, together with basic software that can be extended for automated treatment planning, such as automated mapping of a predefined region based on the fundus scan. In anticipation of clinical *in vivo* imaging and treatment, the SPECTRALIS Centaurus system was adapted to support mounting of enucleated eyes (Fig. 11.12). Preliminary measurements already show results comparable to the preceding studies.

With first encouraging *ex vivo* experiments performed at the optoLab and the Medical Laser Center Lübeck (MLL) as well as the technical safety evaluation, it is planned to validate the SPECTRALIS Centaurus system for clinical *in vivo* studies.

**Fig. 11.12** Overview of the SRT-OCT setup and the measurement protocol [96]. (**a**) The SRT upgrade module has become an integral part of the SPECTRALIS platform and is (**b**) inserted into the compact measurement head. (**c**) The treatment laser is controlled by a foot switch and can be optionally suppressed by the real-time OCT safety

#### **11.5 Conclusions and Outlook**

The SPECTRALIS Centaurus system integrates a μs pulsed treatment laser with a high-speed confocal laser scanner in a compact design. Despite not being a commercial device, the platform has shown a possible way to clinical implementation. In addition to the long-term stable optomechatronic design, the "commercial-level" platform features high reliability and low maintenance, combined with a simple and user-friendly interface and remote maintenance of the device. The integration of a fast, high-resolution 3D imaging modality with therapeutic capabilities enables detailed planning and precise control during potentially harmful treatment. The exact positioning of the treatment laser pulse in time and space, independent of the patient's movement, as well as the possibility of immediate pre-/post-diagnosis and a feedback system, allow for unrivalled accuracy and fulfilment of the physician's pre-set targets including

mechanism. (**d**) For *ex vivo* measurements of entire eyes or single RPE explants a vertical mount with surface lubrication is mounted in front of the patient-side objective. (**e** and **f**) Eyes are cut open, stained with life/dead markers to extract pig retinae for investigation of laser burns under a fluorescence microscope

safety features that avoid overexposure of the retina. Current approaches for limiting exposure are already well developed and have started to become indispensable tools for minimally invasive optical surgery. The ophthalmic instrument market has shown that contactless, all-optical technologies have always been favoured by clinicians and patients alike, due to their simpler, speedier and safer application, with higher convenience and lower risk of mechanical injury or cross-contamination. Even though the principal mechanisms of laser-tissue interaction are well known, the variability of biological materials and the delicate alignment of the treatment system demand considerably increased control for each individual exposure. Precision-guided *in vitro* experiments already suffer from variations caused by the inhomogeneity of melanosome density and absorption coefficients within the RPE layer. To transfer the results of *ex vivo* trials to clinical ones, which add multiple layers of complexity and variation, high reproducibility of the instrument during clinical application is required. Literally in the focus of this research, the critical laser parameters are tied to the available technology and reveal only part of the theoretical possibilities. As novel compact and cost-effective lasers open up new possibilities, the optimal choice of laser treatment for certain diseases will require more advanced studies, which will benefit from a multi-functional device that provides full control of the parameters during application, along with an advanced diagnostic toolset for planning, performing, reviewing and even aftercare. This not only provides the surgeon with a more precise instrument that improves the statistics, but also allows the patient to benefit from an optimal and individual treatment that results in a therapeutic outcome closely resembling the original plan while mitigating risks and unwanted side effects.

**Acknowledgments** We gratefully acknowledge the continuous and valuable support of the team members from Bern University of Applied Sciences in Biel: Christian Burri, Michael Peyer, Patrick Steiner, Daniel Kaufmann, Mathias Mooser, Patrik Arnold, Tiziano Ronchetti, Volker M. Koch, Joern Justiz, Anke Bossen, Patrick Morgenthaler, Dominik Inniger, Christoph Meier; the Medical Laser Center Luebeck: Dirk Theisen-Kunde, Veit Danicke, Alessa Hutfilz; the colleagues from Meridian AG, Thun: Michael Stetter, Rudolf von Niederhäusern, Eric Odenheimer; and the Heidelberg Engineering team: Stefan Schmidt, Michael Reutter, Joerg Fischer, Tilman Otto, who enabled the development and implementation of this technology.

#### **References**


towards slowing the macular ageing process. Exp Eye Res. 2012;97(1):63–72.


control and monitoring: a proof of concept study. Biomed Opt Express. 2018;9(7):3320–34.



**Part III**

# **Anterior Segment Imaging and Image Guided Treatment**

**12**

# **In Vivo Confocal Scanning Laser Microscopy**

Oliver Stachs, Rudolf F. Guthoff, and Silke Aumann

#### **12.1 Introduction**

In vivo imaging of corneal cells has been pioneered by confocal scanning laser microscopy (cSLM) and the availability of commercial devices. With the combination of the Heidelberg Retina Tomograph (HRT, Heidelberg Engineering GmbH) and the Rostock Cornea Module (RCM), a unique device using confocal point-scanning technology was introduced in 2002 [1]. Since then, the HRT-RCM has served as a well-established instrument in experimental and clinical ophthalmology. The system represents an important device for ex vivo and in vivo studies of animal and human corneas, allowing a qualitative and partly quantitative analysis of corneal and limbal structures.

cSLM is a non-invasive imaging technique which generates transversal images (also called en-face images) with high resolution and excellent depth discrimination. Sequential acquisition of tomograms along the depth direction allows for 3D reconstruction of the volumetric data stack. The

S. Aumann (\*) Heidelberg Engineering GmbH, Heidelberg, Germany

imaging procedures cover a broad range of experimental and clinical applications. For example, cSLM serves as a multifunctional tool for corneal analysis of laboratory animals [2]; it allows for assessing stromal changes in patients with keratoconus before and after cross-linking [3], for experimental full-thickness corneal 3D imaging [4], and for the quantification of the morphology of epithelial cell layers and the subbasal nerve plexus (SNP) [5]. Intense research is focused on large-scale image reconstruction of the SNP [6–11], because it has the potential to serve as a biomarker for early neurodegenerative changes [12].

While OCT imaging is very promising for corneal cross-sectional imaging, confocal scanning laser microscopy still offers superior lateral resolution combined with higher image quality. Various 3D reconstruction techniques for confocal image stacks have been published. Volume imaging with a tandem scanning confocal light microscope has been demonstrated [13], but image quality and resolution are reduced compared to cSLM-based methods. Volume imaging with the HRT-RCM is described in [5, 14, 15].

This chapter summarizes the principles of confocal scanning laser microscopy and the technical implementation in the HRT-RCM. A selection of ophthalmological and non-ophthalmological applications is presented and current and future developments are shown.

O. Stachs · R. F. Guthoff

Department of Ophthalmology, Rostock University Medical Center, Rostock, Germany

#### **12.2 Principle of Confocal Scanning Laser Microscopy**

The development of confocal imaging was largely driven by cell biology and the desire to observe biological events in vivo. In the early twentieth century, techniques for fluorescent staining of cellular structures paved the way for fluorescence microscopy. With the intention to study neurons and their function at a cellular level, Marvin Minsky developed and patented the principle of confocal scanning microscopy in 1957 [16]. The fundamental idea is to illuminate the sample pointwise and to detect the reflected light pointwise, which means that the image build-up takes place sequentially. As a consequence, it was not until the 1980s that the full potential of this technique could be exploited: the rapid progress in laser and scanning technology as well as in digital processing and storage paved the way for a more efficient technical implementation of the confocal imaging principle. In vivo imaging at sub-cellular resolution and in real time became feasible.

In conventional microscopy, the specimen is flood-illuminated, meaning that backscattering occurs simultaneously over the whole illuminated field within the tissue. The optical imaging system generates an image in which, for each tissue location, stray light from the vicinity also contributes to the final image. This crosstalk leads to a noisy background signal and impairs the visibility of small and faint tissue structures. For a thick specimen, many fluorophores, not only those within a thin slice in depth, contribute to the final image. Therefore, the preparation of very thin samples is needed to get any depth information at all.

In contrast, a confocal setup realizes point illumination and point detection, as depicted in Fig. 12.1. The specimen is sequentially illuminated point by point, either by using an illumination pinhole (also called aperture) or a focused beam. At each tissue location, light is reflected or backscattered and travels the same way back. It is separated from the incident beam by a beam splitter, and the intensity is detected by a photodetector, which converts photons into a quantifiable electrical signal. Stray light is blocked by a pinhole in front of the detector. Therefore, only light from a small confined volume within the specimen, the confocal volume, can reach the detector. Out-of-focus light is strongly reduced, improving image resolution and contrast considerably. The detected signal is confined to a very limited depth range even within a thick specimen. Therefore, confocal imaging is commonly referred to as optical slicing or optical sectioning, which underlines its similarity to histology while emphasizing its non-invasiveness. The term confocal refers to the fact that the planes of illumination and detection are optically conjugated: both are in focus simultaneously.

**Fig. 12.1** The principle of confocal illumination and detection: Using an aperture, a light source is imaged to illuminate a single spot within the specimen. With the same

objective lens the backscattered (or fluorescent) light originating from this spot is imaged on the detector. The detector pinhole removes all light from outside the focal plane

Confocal reflection imaging reveals the backscattering or reflecting properties of the specimen. Elastic scattering preserves the photon energy, i.e. the wavelength of the backscattered light is not shifted with respect to that of the illuminating light. However, the backscattering efficiency or scattering cross section generally is wavelength dependent. In confocal fluorescence imaging the tissue is either stained with fluorophores or shows intrinsic (auto-) fluorescence. Additional optical barrier filters are needed to block the excitation light, thereby allowing for selectively detecting fluorescence light of a specific wavelength or wavelength range. Some selected applications of confocal fluorescence imaging in ophthalmology can be found in Chap. 2.

In vivo imaging of the eye is particularly demanding due to eye motion, which makes acquisition speed crucial. The first confocal scanning laser ophthalmoscope was presented by Webb et al. [17] in the 1980s, demonstrating imaging of the ocular fundus. Since then, retinal confocal imaging has developed into a routine application in clinical practice. Without any additional measures (i.e. adaptive optics), the optical resolution of retinal images is limited to about 15 μm. Both the finite pupil size and optical aberrations of the eye account for this and imaging at sub-cellular level cannot be achieved.

In contrast, confocal microscopy is feasible for the anterior segment of the eye. As the eye is not part of the imaging system—or only to a very limited extent—microscope lenses with high numerical aperture (NA) can be used. Employing a tandem scanning confocal microscope, Cavanagh et al. demonstrated in vivo imaging of the human cornea in 1989 [18]. Confocal imaging provides a much higher optical resolution and optical sectioning capability than a slit-lamp biomicroscope, which is limited to a magnification of about 40× and an optical resolution of about 20 μm.

Different types of confocal microscopes have been described for use in ophthalmology, which differ mainly in their principal approach to scanning the specimen. The tandem scanning confocal microscope (TSCM), originally developed by Petran and Hadravsky [19], uses the basic idea of the Nipkow disc, a rotating disc equipped with a spiral array of pinholes. The specimen is illuminated and sampled in parallel by a set of conjugate pinholes arranged along concentric circular traces. A bright light source is required, usually a xenon or mercury arc lamp. This allows for true-color and real-time imaging, but has the disadvantage of a very low light throughput and therefore rather low image quality and contrast. The image is observed directly with the eye. The system is no longer commercially available.

The concept of the scanning-slit confocal microscope (SSCM) was developed by Thaer et al. [20, 21]. A sheet of light is scanned over the back focal plane of the microscope objective, thereby illuminating the specimen with a slit of light and detecting the backscattered light with a line sensor. The parallel illumination and detection allows for considerably increasing acquisition speed but with the drawback of reduced optical resolution. Confocality is maintained only along one spatial direction, resulting in anisotropic lateral resolution and degraded depth resolution.

Nowadays, laser diodes are commercially available at a variety of wavelengths from the visible to the infrared region. Due to their compact design and simple control, they can easily be integrated into imaging systems. When combined with beam-shaping optics, lasers can provide small spot sizes and high radiant flux, thereby allowing for very efficient illumination and detection. Spot illumination eliminates the need for an illumination pinhole. Such confocal point-scanning systems have superior signal-to-noise characteristics and allow for laterally isotropic resolution and optimum depth-sectioning ability. Nevertheless, in order to reach high frame rates, the requirements for the scanning unit are demanding. The HRT-RCM represents a confocal microscope of this type and will be introduced in more detail in the next section.

The theory of confocal microscopy was more formally developed and extended by Wilson and Sheppard [22]. The superior optical performance of a confocal setup is reflected by its point-spread function (PSF). Generally, the 3D PSF describes the intensity distribution in the image space of an optical imaging system resulting from a point source object. Depending on the imaging properties, the point image may be blurred in the presence of optical aberrations, in the ideal case it is limited by diffraction only. In this case, the central maximum of the 3D PSF, which contains nearly 90% of the total energy available, can be described as an ellipsoid of rotation.

For considerations of resolution and optical slice thickness, it is useful to define the region in which the intensity of the PSF has dropped to half of its central maximum in the lateral and axial direction, respectively. In the transversal plane, this condition confines an area corresponding to the central bright disc of the concentric diffraction pattern. In the perpendicular plane it defines a double cone or hourglass along the optical axis, i.e. in depth z. The two characteristic measures are given by the beam waist *Δx* and the confocal parameter b, which corresponds to twice the Rayleigh range *zR*.

As is depicted in Fig. 12.2, the PSF and thereby the resolution depend on the wavelength and the numerical aperture (NA) of the microscope lens or any other focusing optics. The numerical aperture is a measure to describe the acceptance cone or light-gathering ability of an objective lens. The shorter the focal length of the objective lens and the larger the beam diameter

**Fig. 12.2** The numerical aperture (NA) determines the lateral image resolution and the depth of field. Lateral resolution is characterized by the beam waist Δx of the focused laser spot, depth of field is specified by the confocal range b or Rayleigh range zR

at the entrance pupil of the lens, the higher the NA. In addition, the acceptance cone can be increased by using immersion objectives, the NA being proportional to the refractive index *n* of the immersion medium.
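
The dependence of resolution on wavelength and NA sketched above can be made explicit with the standard textbook estimates for a diffraction-limited system (half-angle $\alpha$ of the acceptance cone, immersion index $n$); these formulas are general approximations, not values quoted by this chapter:

```latex
\mathrm{NA}=n\sin\alpha,
\qquad
\Delta x \approx \frac{0.61\,\lambda}{\mathrm{NA}},
\qquad
\Delta z \approx \frac{2\,n\,\lambda}{\mathrm{NA}^{2}} .
```

Lateral resolution thus improves linearly with NA, while the depth of field shrinks quadratically, which is why high-NA immersion objectives give confocal microscopes their thin optical sections.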

In contrast to conventional microscopy, the optical performance of the confocal system is determined by two PSFs: the illumination *PSFill*, which describes the intensity distribution of the scanning laser focus in the object space, and the detection *PSFdec*, which accounts for the projection of a point object into the image space. Both depend on the wavelength and numerical aperture (NA), as was shown above. In addition, *PSFdec* is determined by the size of the detection pinhole. The imaging properties of a confocal system are defined by the total *PSFtot*, which mathematically is described as the product of the two single PSFs. It can be shown that *PSFtot* is always narrower than or equal to *PSFill*, whereby the size of the detection pinhole plays the decisive role in resolution and depth discrimination: it controls the continuous transition between object information being suppressed and being made visible. If the pinhole size is larger than the central disc of the diffraction pattern described above, the resolution is governed by *PSFill* and there is no difference compared to conventional imaging. As the pinhole becomes smaller, *PSFdec* approaches *PSFill*, which results in a superior resolution compared to conventional imaging. It can be shown that in the limit of *PSFdec* = *PSFill*, the lateral resolution and the depth discrimination are both improved by a factor of about 1.4 [23].
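
The factor of about 1.4 can be made plausible with a Gaussian approximation of the two PSFs (an illustrative simplification of the rigorous treatment in [22, 23]): multiplying two identical Gaussians of width $\sigma$ yields a Gaussian narrowed by $\sqrt{2}$:

```latex
\mathrm{PSF}_{tot}(r)=\mathrm{PSF}_{ill}(r)\cdot\mathrm{PSF}_{dec}(r),
\qquad
e^{-r^{2}/2\sigma^{2}}\cdot e^{-r^{2}/2\sigma^{2}}
= e^{-r^{2}/2(\sigma/\sqrt{2})^{2}},
```

so in the limit $\mathrm{PSF}_{dec}=\mathrm{PSF}_{ill}$ the width of the total PSF shrinks by $\sqrt{2}\approx 1.4$.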

In practice, there is a trade-off between resolution and detection sensitivity. The size of the pinhole is adapted so as to narrow *PSFdec* as much as possible while still allowing enough photons to pass through and be detected. The main benefit of confocal imaging is not the rather modest increase in optical resolution; the fundamental improvement results from the suppression of out-of-focus light, which allows optically slicing a specimen. A depth (z) position can then be assigned to each image, and volumetric imaging of intact and living specimens becomes feasible.

#### **12.3 In Vivo cSLM with the Rostock Cornea Module**

The Heidelberg Retina Tomograph (HRT, Heidelberg Engineering GmbH) was the first commercial confocal scanning laser ophthalmoscope (cSLO) to be widely used in a clinical setting for glaucoma diagnosis. It was brought to market in the early 1990s; the next generations of the device, called HRTII and HRT3, were released in 1998 and 2005, respectively.

The HRT uses point scanning to acquire a 3D stack of tomograms of the optic nerve head, which allows for its topographic analysis. A normative database is used to compare individual data with those of healthy subjects and early-glaucoma patients. Follow-up examinations and progression analysis have proven highly beneficial in glaucoma care. The device enabled routine screening in clinical and medical practice settings.

In the early 2000s, Stave et al. from the Rostock Eye Clinic presented an optical attachment to the HRTII [1], thereby allowing for microscopic imaging of the cornea. Based on their experience, Heidelberg Engineering developed the Rostock Cornea Module (RCM), which entered the market in 2004. Designed as an accessory for the HRTII, it allows for using the same imaging platform for different ophthalmological applications. The RCM has also been released for the HRT3. As presented in Fig. 12.3, the RCM is attached to the objective of the cSLO.

The HRT camera head integrates all core components required for confocal imaging—a light source, a scanning unit and a photodetector. Although the principal setup is the same for corneal imaging, the RCM objective changes the imaging parameters in a fundamental way. Image generation is briefly explained in the following section, where differences between corneal and retinal imaging are pointed out.

The schematic optical setup of the HRT-RCM is presented in Fig. 12.4. Essentially, the HRT is changed from a confocal scanning laser ophthalmoscope (cSLO) into a confocal scanning laser microscope (cSLM) by placing the RCM objective in the entrance pupil of the HRT. In retinal imaging, the pupil of the examined eye is positioned at this pupil location.

**Fig. 12.3** The Heidelberg Retina Tomograph (HRT3) is a confocal scanning laser ophthalmoscope which allows for analysis of the optic nerve head. In combination with the Rostock Cornea Module (RCM), which is attached to the HRT, confocal microscopy of the cornea is feasible

The scanning and detection schemes are the same for both the HRT and the HRT-RCM. The collimated beam of a red laser diode (670 nm) is deflected by a beam splitter and enters the x-y scanning unit. To acquire a two-dimensional transversal image of the specimen, the beam is deflected in two perpendicular directions. A resonant x-scanner supports a line rate of 16 kHz, while a galvanometer scanner is used for the slower y-scan, thereby providing a raster scan.

The scan pupil is relayed by means of telescope optics to the entrance pupil of the RCM objective. The core component of the RCM is a water-immersion microscope lens with a high numerical aperture. It provides a focus at a distance of a few millimeters in front of the apex of the lens.

**Fig. 12.4** Principal setup of the confocal scanning laser microscope (cSLM): The HRT-RCM uses the light source and the scanning and detection units of the HRT. The RCM objective changes the imaging parameters to allow for microscopic imaging. The water immersion lens is coupled to the cornea with a single-use contact element

The cornea or any other tissue under examination is positioned in the focal plane. From each illuminated point, backscattered light travels the same path back and is separated from the illumination path by the beam splitter. The signal is detected by an avalanche photodiode behind a detection pinhole, which ensures confocality. The output signal of the photodetector is sampled with a pixel clock of 12 MHz while the x-y scanning unit performs a transversal raster scan. Each digitized value is assigned to the corresponding pixel in the confocal image. The final image consists of 384 × 384 pixels and represents the backscattering properties of a thin transversal slice within the tissue. A frame rate of about 30 Hz allows for real-time imaging.
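The relationship between the quoted scan parameters can be checked with a short calculation (a simplified sketch in Python; the duty cycle and flyback behavior of the real scanner are not modeled):

```python
# Consistency check of the stated HRT-RCM scan parameters
# (all values taken from the text; timing is idealized).
line_rate_hz = 16_000    # resonant x-scanner line rate
pixel_clock_hz = 12e6    # sampling clock of the photodetector signal
lines_per_frame = 384
pixels_per_line = 384

# Time needed to sample one 384-pixel line at the pixel clock:
active_line_time_us = pixels_per_line / pixel_clock_hz * 1e6  # 32 us

# Period of one resonant scanner line:
scan_line_time_us = 1 / line_rate_hz * 1e6  # 62.5 us

# An ideal frame (every scanner line used, no flyback) would run at:
ideal_frame_rate_hz = line_rate_hz / lines_per_frame  # ~41.7 Hz
```

The ideal frame rate of about 42 Hz lies above the quoted rate of about 30 Hz; the difference is presumably consumed by scanner turnaround and synchronization.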

For volumetric imaging the focal plane has to be shifted axially to change the depth position of the image. The RCM has a mechanical z-feed, which allows for manual fine adjustment of the focus plane within a range of approximately 1.7 mm. In addition, the focal plane can be shifted optically within a range of 80 μm.

In the following, some important imaging details and components are described in more detail. The interested reader can find more details on the HRT in Chap. 2. As can be seen in Fig. 12.4, the pivot point of the scanning unit is optically transferred to the entrance pupil of the RCM objective. This pupil can be seen as the origin of a collimated beam being deflected by virtual scanners. Ideally, it is brought to coincide with the entrance pupil of the microscope lens. The large numerical aperture of the microscope lens (Achroplan 63×/W, NA 0.95, Zeiss) provides high optical resolution and depth discrimination. The axial position of the focal plane measured from the apex of the front lens equals 2.2 mm and is called the working distance.
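As a rough orientation, the diffraction-limited performance of such a lens can be estimated with standard textbook formulas (a sketch; the device figures quoted later in this chapter are coarser, since aberrations and system factors are not included here):

```python
# Diffraction-limited estimates for a high-NA water-immersion lens,
# using standard wide-field formulas as a rough orientation.
wavelength_um = 0.67   # 670 nm laser diode of the HRT
na = 0.95              # numerical aperture (Achroplan 63x/W)
n = 1.33               # refractive index of water

# Rayleigh criterion for lateral resolution:
lateral_um = 0.61 * wavelength_um / na        # ~0.43 um

# Common estimate for the axial extent of the focus:
axial_um = 2 * n * wavelength_um / na**2      # ~2.0 um
```

These idealized values illustrate why a lens of this NA supports the micrometer-scale resolution and thin optical sectioning described in the text.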

The cornea or specimen is optically coupled to the imaging device using an aqueous gel of high viscosity and a sterile, single-use contact element—the TomoCap. The TomoCap is a thin, transparent cap made of PMMA, which touches the cornea with its flat front surface during the imaging procedure.

In contrast to retinal imaging, where the eye is an integral part of the imaging process, cSLM allows the magnification and resolution to be set independently of the imaged object. Essentially, the microscope lens determines the optical parameters and must be selected accordingly.

The cornea has an average refractive index of n = 1.376, its main constituent being water. Therefore, water-immersion microscope lenses are the best choice. They are optimized for imaging in aqueous solution, i.e. for a medium with a refractive index of 1.33. They minimize spherical aberration while imaging through an aqueous specimen, and they potentially provide higher numerical apertures than non-immersion types, as the focus is sharper and the acceptance angle for collecting backscattered light is larger (by a factor of 1.33).
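The factor of 1.33 follows directly from the definition NA = n·sin(θ), as a minimal sketch shows (the half-angle is an illustrative value, not a device parameter):

```python
import math

# NA = n * sin(theta): for the same acceptance half-angle theta,
# water immersion (n = 1.33) raises the NA by a factor of 1.33
# compared to air (n = 1.00).
theta = math.radians(45)          # illustrative half-angle
na_air = 1.00 * math.sin(theta)
na_water = 1.33 * math.sin(theta)
factor = na_water / na_air        # 1.33, independent of theta
```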

Any change in refractive index along the optical pathway gives rise to back reflection, which introduces signal loss and can potentially lead to ghost reflections. Therefore, the aqueous gel also has to be applied to bridge the air gap between the microscope's apex and the TomoCap.

In the following, the standard workflow of a cornea exam and the main software functionality are described briefly. It is common practice to use a topical anesthetic during the imaging procedure to minimize patient discomfort. In addition, a drop of eye gel is instilled into the patient's eye. After the instrument has been prepared, the patient is positioned in the headrest. The RCM objective is carefully adjusted to approach the patient's cornea. This procedure is monitored by a small external camera, as depicted in Fig. 12.5 (right). As soon as the TomoCap touches the cornea, cell structures become visible (see Fig. 12.5, left). With the manual z-feed, the focal plane can be shifted through the whole cornea, thereby sequentially imaging single corneal cell layers.

The contact method reduces eye motion and, thanks to axial stabilization, makes it possible to assign depth values to the cSLM images. The actual focus position is measured with respect to a reference plane, which allows certain cell layers to be detected in a more systematic way.

The HRT-RCM supports the acquisition of single images, movies and volume scans. A volume scan (also referred to as z-series) is a sequence of optical sections at an equidistant spacing of about 2 μm. Forty images are collected while the focus of the HRT incrementally changes by means of a stepping motor. This automated volume scan is limited to a depth range of about 80 μm.
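The geometry of such a z-series can be sketched as follows (values as stated above; note that 40 sections at 2 μm spacing span 39 intervals, i.e. about 80 μm):

```python
# Depth positions of the automated volume scan ("z-series"):
# 40 optical sections at ~2 um equidistant spacing.
n_sections = 40
step_um = 2.0

z_positions_um = [i * step_um for i in range(n_sections)]  # 0, 2, ..., 78
scan_range_um = z_positions_um[-1] - z_positions_um[0]     # 78 um, i.e. ~80 um
```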

The HRT-RCM features a field of view of 400 μm × 400 μm, a lateral resolution of 1–2 μm and an axial resolution of about 4 μm. It thus enables imaging at the subcellular level with almost isotropic resolution. Compared to retinal cSLO imaging, the lateral resolution is increased about tenfold and the depth of field is decreased by about a factor of one hundred. Very sharp and narrow slicing of the specimen is feasible, which is essential to access single epithelial cell layers of the cornea. As was already pointed out, the large numerical aperture (NA) of the microscope lens accounts for the high resolution, whereas in retinal imaging the resolution is limited by the rather low NA of the eye and the diameter of the imaging beam. This principal difference was schematically presented in Fig. 12.2.

Cellular imaging requires a microscope lens with a short focal length, which also limits the working distance (WD). The depth of the layer

**Fig. 12.5** The user interface of the HRT-RCM software displays two images simultaneously: the cSLM image (left) and the monitoring image of the external camera (right)

which can be accessed within the tissue is confined to that distance. A WD of 2.2 mm is sufficient to image cornea, limbus or sclera. But deeper tissue of the anterior segment, e.g. the lens epithelium, can only be imaged with a microscope lens of lower NA, which then reduces resolution.

The lateral field of view (FOV) is determined by the maximum scan angle and the focal length of the microscope lens; the FOV is proportional to the focal length. Therefore, subcellular resolution is feasible only for small FOVs on the order of several hundred micrometers. Field of view and resolution cannot be optimized independently.

However, it is possible to stitch single frames into a larger field of view. The mosaicking feature of the HRT-RCM software enables the acquisition of real-time composite images. These comprise a maximum of 64 single frames, resulting in a total FOV of about 3 mm × 3 mm. Each single frame is inspected for features that match the composite and is then aligned and added to it. In general, random and involuntary eye movement is not sufficient to fill this large frame; the patient needs eye guidance, e.g. by a moving fixation target. Composite images have been shown for the subbasal nerve plexus, see the following section.
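The composite size can be estimated from the single-frame FOV (a sketch; the overlap fraction required for feature matching is an assumed illustrative value, not taken from the text):

```python
# Rough size of an 8 x 8 mosaic of 400 um single frames.
frame_um = 400.0
grid = 8            # 8 x 8 = 64 single frames
overlap = 0.05      # assumed fractional overlap between neighboring frames

# Each additional row/column adds a frame minus the overlapping strip:
mosaic_um = frame_um * (1 + (grid - 1) * (1 - overlap))  # ~3060 um, i.e. ~3 mm
```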

The cornea is a highly transparent tissue which backscatters only about 1% of the incident light in the visible range [24]. Its main constituent is water, which shows practically no absorption in the visible spectrum. The main contribution to light attenuation while passing through the cornea comes from scattering rather than absorption.

In the field of biomedical optics Rayleigh scattering and Mie scattering are commonly used to describe light-tissue interaction.

Rayleigh scattering refers to scattering by small particles that have a refractive index different from the surrounding medium and a particle size much smaller than the wavelength. Spatial fluctuations in density on this scale may also cause continuous variations in refractive index. Rayleigh scattering shows equal intensities for forward and backward scattered light, which are inversely proportional to the fourth power of the wavelength.
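The strong wavelength dependence can be illustrated numerically (a sketch; the blue wavelength is chosen for illustration only):

```python
# Rayleigh scattering: intensity ~ 1 / wavelength^4, so shorter
# wavelengths are scattered far more strongly.
def rayleigh_ratio(lam_short_nm: float, lam_long_nm: float) -> float:
    """Scattered-intensity ratio I(short) / I(long)."""
    return (lam_long_nm / lam_short_nm) ** 4

# Illustrative blue (450 nm) vs the HRT's red laser diode (670 nm):
ratio = rayleigh_ratio(450.0, 670.0)  # ~4.9
```

Blue light is thus scattered roughly five times more strongly than the red wavelength used by the HRT, one motivation for red or near-infrared illumination in tissue imaging.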

Mie scattering commonly refers to scattering by particles comparable in size to the wavelength. It shows a weaker dependence on wavelength and preferentially takes place in the forward direction.

Most biological tissues show neither pure Rayleigh nor pure Mie scattering: the photons are preferentially scattered in the forward direction, i.e. scattering is highly anisotropic [25]. On the other hand, the observed wavelength dependence is generally stronger than predicted by Mie theory [26]. Therefore, mathematical modeling of laser-tissue interaction or Monte Carlo simulation may be useful to deduce optical scattering properties. Structural modifications in tissue can manifest themselves as changes in confocal signal intensity, and it is a major field of research to link macroscopic optical properties with microscopic tissue architecture.

The transparency of the cornea is attributed to an extremely regular structure of collagen fibrils of homogeneous diameter [27]. The cornea is about 500 μm thick, of which about 90% consists of collagen fibers, interstitial substance and keratocytes.

The corneal epithelium comprises superficial cells, wing cells and basal cells. The epithelial cells have bright cell boundaries, and their size varies from about 40 μm for the superficial cells to 8–10 μm in the basal epithelium.

As is depicted in Fig. 12.6 (a) the superficial epithelial cells have a polygonal shape with a bright nucleus and bright cell borders. The wing cells (b, c) appear with bright cell borders and dark cytoplasm. The nucleus can be distinguished only with difficulty. There is minimal variation in shape and size. The cell borders of the basal cells (d) are very bright, the nucleus is not visible. The reflectivity of the cytoplasm is rather inhomogeneous.

Immediately posterior to the basal epithelium lies an amorphous membrane called Bowman's layer (e). Bowman's layer is made of collagen fibers and contains unmyelinated C-nerve fibers. It is about 10 μm thick, and the confocal images appear featureless and grey, with discrete beaded nerve bundles running parallel to the corneal surface.

The keratocyte nuclei of the stroma (f, g) are 5–30 μm in diameter and hyperreflective. In normal tissue, the collagen fibers and the interstitium appear as a gray amorphous background.

**Fig. 12.6** Normal findings in ocular surface tissues: (**a**) superficial cells, (**b**) upper wing cells, (**c**) lower wing cells, (**d**) basal cells, (**e**) subepithelial nerve plexus, (**f**) anterior stroma, (**g**) posterior stroma and (**h**) endothelium

In the anterior stroma, myelinated nerve fibers, which occur at lower density and greater thickness, can occasionally be visualized. Between the posterior stroma and the endothelium, Descemet's membrane appears as an acellular layer. It has a hazy appearance and becomes more visible with increasing age.

The corneal endothelium (h) is a single layer of hexagonal or polygonal cells, about 4–6 μm thick and 20 μm in diameter, appearing with bright cell bodies and hyporeflective cell boundaries. The cell nuclei are rather difficult to recognize. With increasing age, endothelial cell density decreases and polymegathism increases.

#### **12.4 Ophthalmological Applications**

For conventional microscopic evaluation of biological tissues, thin slices of 2–5 μm thickness are usually cut, stained with various chemicals (dyes) and examined by transmitted-light microscopy at high magnification (400- to 1000-fold).

For in vivo examination of semitransparent tissues, slit lamp biomicroscopy has been the standard technique for over a century. Its inventor, Allvar Gullstrand, a Swedish ophthalmologist and physicist, was awarded the Nobel Prize in 1911 mainly for this breakthrough invention. In this technique, optically cut planes are oriented sagittally and observed through a binocular microscope at magnifications of up to 50-fold. At this magnification, single-cell resolution is impossible. Nevertheless, the majority of corneal diseases could be diagnosed that way, and treatment follow-up was easily possible in most patients.

The confocal in vivo microscopy images deal with the same unprocessed biological structures but


Both phenomena made it necessary to create new standards for the interpretation of images obtained by in vivo confocal microscopy of the cornea. In the meantime, this is documented in various textbook chapters and atlases, e.g. [28].

In general, there are a number of very interesting applications for corneal diagnostics covering a broad range of clinical applications:

	- Corneal grafting

In the following, three examples will be given to underline the clinical benefit of in vivo corneal confocal microscopy. Then, short overviews of cSLM in animal and interdisciplinary research are given respectively.

#### **12.4.1 Diagnoses of Keratomycosis**

Fungal keratitis plays a major role in many developing countries, where early diagnosis and treatment are often not readily available. Even in industrialized countries, where such infections are relatively infrequent, fungal keratitis can end in devastating results when prolonged diagnostic procedures delay proper management [29].

In vivo confocal microscopy has proven to be an excellent tool for instant diagnosis, avoiding the time-consuming preparation of sample cultures. cSLM supports the differential diagnosis between non-fungal keratitis, which shows activation of keratocytes, and fungal infection, in which fungi can be demonstrated instantly in a corneal ulcer, as shown in Fig. 12.7.

**Fig. 12.7** Slit lamp photograph of corneal ulcers of unknown aetiology (**a**, **b**) and confocal images of these lesions showing spindle shaped activated keratocytes, (**c**)

corresponding to (**a**) and typical fungal elements (**d**) corresponding to (**b**)

#### **12.4.2 Subbasal Nerve Plexus**

The ophthalmological evaluation of the subbasal nerve plexus of the cornea might add value to the diagnosis of a variety of diseases.

Corneal nerves are affected in cases of


Small fiber neuropathy in diabetic patients, in particular, is a main cause of limb amputation in both industrialized and developing countries [30]. Clinical quantification of the disease with widely accepted methods such as the neuropathy symptoms score (NSS) and the neuropathy deficit score (NDS), as well as skin sensitivity measurements with a so-called monofilament, delivers positive results only in patients in whom considerable nerve fiber damage has already taken place [12, 31].

In vivo confocal microscopy provides optical slicing parallel to the surface of the cornea. This offers an ideal physical prerequisite to display and quantify structures of the subbasal nerve plexus in a well-defined anatomical plane. The nerve plexus is located between Bowman's membrane and the basal lamina of the corneal epithelial cells (see Sect. 12.6). Numerous publications deal with this structure as a very early surrogate marker for diabetic neuropathy [12]. Examples of normal and rarefied nerve fibers in this layer are given in Fig. 12.8.

Imaging of the subbasal nerve plexus might pave the way for new treatment strategies and more effective prevention of this serious disease. Diabetes-induced complications could thereby be reduced.

#### **12.4.3 Corneal Keratocyte: A Neglected Entity of Cells**

Keratocytes are specialized fibroblasts representing about 10% of the volume of the corneal stroma. Their cytoplasm with a total diameter of up to 100 μm is highly transparent whereas

**Fig. 12.8** Single image from the subbasal nerve plexus of the cornea in an individual without diabetes (**a**) and in an individual showing a decrease in the nerve density and nerve length (**b**)

nuclei with a diameter of approximately 10 μm are the main light-scattering elements of the cornea.

Vogt described them as "corpusculi cornea" in his fundamental textbook on slit lamp biomicroscopy in 1930. Keratocytes are embedded in a network of highly organized collagen lamellae of different shapes, which remain invisible even to confocal microscopy because their complex 3D arrangement avoids any light scattering or reflectance.

In vivo confocal microscopy easily displays keratocyte nuclei. The following mean cell densities were found: basal epithelium 6000 ± 1080 cells/mm², anterior stroma 765 ± 262 cells/mm², mid stroma 347 ± 64.4 cells/mm², posterior stroma 315 ± 57.2 cells/mm², and endothelium 2720 ± 367 cells/mm² [32].

There is still no theory explaining why the density of keratocytes varies with their localization, or whether there are distinct keratocyte subtypes in different layers of the cornea [33].

Various noxae, such as mechanical ones caused by epithelial erosions or toxic ones from free oxygen radicals during corneal cross-linking, change the optical properties of the keratocyte cytoplasm dramatically, as depicted in Fig. 12.9. There is evidence that these morphological changes appear in the process of cell death, when apoptotic vesicles in the keratocyte stroma turn the transparent cellular parts into accumulations of highly scattering elements [34]. These ghost cells finally disappear completely and are replaced after days by spindle-shaped keratocyte subtypes migrating from adjacent corneal areas.

**Fig. 12.9** Volumetric cSLM representation in an individual after cross-linking showing alterations in the anterior part of the cornea (**a**–**c**) and a normal appearance in the posterior part (**d**–**f**)

**Fig. 12.10** Mosaicking cSLM in animal research showing the rabbit endothelium (2.5 mm × 2.5 mm)

#### **12.4.4 cSLM for Animal Studies**

In vivo cSLM can be used for a wide range of applications in animal studies and veterinary research, see Fig. 12.10. Data are available on the normal corneal anatomy of rabbits [35], rats [36, 37] and mice [38, 39]. Interspecies comparisons of the anatomy of laboratory animals (rabbits, rats, mice) were published by Labbe and Reichard [2, 40].

There are a large number of publications covering the full range of corneal research. In vivo cSLM has been used in animal studies to assess the corneal surface after the application of topical drugs or preservatives [41], to analyze the corneal response after refractive surgery [42, 43], and to study alterations after contact lens wear [44, 45]. Furthermore, a number of studies use mouse models to investigate diabetes-induced small fiber neuropathy [46, 47]. Hovakimyan et al. have investigated a matrix-based regenerating agent for corneal wound healing after collagen cross-linking [34].

#### **12.4.5 Interdisciplinary Research**

The ability of cSLM to generate high-resolution in vivo images of the highly innervated cornea has drawn pronounced interest, as it allows this innovative technology to be used as a source of biomarkers for disease staging in humans.

For instance, cSLM has visualized and highlighted significant morphological alterations of the subbasal nerve plexus in diverse diseases such as Parkinson's disease and progressive supranuclear palsy [48], amyotrophic lateral sclerosis, chronic migraine [49], multiple sclerosis [50, 51] and amyloid neuropathy [52]. Diabetic peripheral neuropathy, in particular, has been the main focus of various studies [12], because characteristic morphological alterations of the subbasal nerve plexus already occur at an early stage of the disease [53, 54]. In conclusion, cSLM has the potential to reliably reveal biomarkers for the early assessment of diabetic peripheral neuropathy (DPN).

Our own pilot study has confirmed that changes of the corneal subbasal nerve plexus can serve as a predictor of diabetic Charcot foot deformity, see Fig. 12.11. The combined clinical assessment of peripheral neuropathy in multiple myeloma, in parallel with the investigation of morphological nerve fiber changes using in vivo cSLM, is a new approach allowing highly sophisticated detection of morphological neuronal changes.

In summary, there is strong evidence for the value of cSLM in general clinical practice as a noninvasive method of assessing peripheral neuropathies and of monitoring inflammatory states and clinical therapeutic response. Current research discusses the contribution of cSLM to diagnosis and assessment in diabetes, neurodegenerative diseases, rheumatology, immunology, and chemotherapy. Indeed, further work is needed to evaluate its potential use in the diagnosis and management of systemic disease.

**Fig. 12.11** Confocal images demonstrating the morphology of the SNP from two control subjects (**a**, **b**) and two Charcot subjects (**c**, **d**). **c** and **d** exhibit readily visible changes, with a decrease in nerve fibers, a decrease in nerve branches and connectivity, and the presence of widely scattered dendritic cells (**c**). Each scale bar represents 200 μm. (In cooperation with Herlyn & Mittlmeier (University of Rostock) as well as Köhler & Allgeier (Karlsruhe Institute of Technology))

#### **12.5 Non-ophthalmological Applications**

The application of cSLM in non-transparent tissue is limited by light interaction with human tissue. Light-tissue interactions include reflection and refraction (when light encounters different types of tissue) as well as absorption and scattering of photons. The absorption of photons depends on factors such as the electronic structure of the atoms and molecules in the material, the wavelength of the light and the temperature. In biological tissue, it is mainly caused by water molecules and macromolecules such as proteins and chromophores.

**Fig. 12.12** cSLM in gynecology (**a**, vulva dysplasia, in vitro), in dermatology (**b**, skin epithelium, in vivo) and in otorhinolaryngology (**c**, taste buds, in vivo)

Tissues are very heterogeneous materials with spatial fluctuations in their optical properties due to variations in density and refractive index. As a consequence, they strongly scatter light and are non-transparent. Imaging of such turbid media is limited by the penetration depth of light. Therefore, cSLM images with cellular resolution can only be obtained at depths of up to 300 μm. In Fig. 12.12, tissue imaging in gynecology, dermatology, and otorhinolaryngology is demonstrated.

Several studies have assessed the ability of cSLM to image characteristic tissue properties. Interestingly, cSLM has been used to describe the cellular morphology and pathological alterations of the oral cavity, cervix, and esophagus [55–57]. Reflectance imaging of human skin can provide insights into the cell morphology and tissue architecture of the epithelium [58, 59].

Furthermore, a number of pathological skin conditions were investigated [60, 61]. CSLM was also performed in the amelanotic epithelial tissue of the gastrointestinal tract [57], lip and tongue [55] and the oropharynx [62]. CSLM can be used as a non-invasive tool in the diagnosis

**Fig. 12.13** Mosaicking cSLM in otorhinolaryngology showing the cellular structure of the mucous membrane (in vivo, 2.5 mm × 2.5 mm)

of sinonasal inverted papilloma, see Figs. 12.13 and 12.14 [63].

A promising application is endoscopic cSLM imaging for diagnostics in schistosomiasis. We were able to detect schistosomal eggs in the urothelium of a patient with urinary schistosomiasis as is demonstrated in Fig. 12.15 [64, 65].

**Fig. 12.14** Upper (**a**) and deeper (**b**) parts of the cylindrical epithelium of nasal polyps. The almost nucleus-free cell bodies show a homogeneous appearance (**a**), whereas the nuclei are densely packed and homogeneously organized near the basal membrane (**b**)

**Fig. 12.15** Endoscopic image modality based on the Heidelberg Retina Tomograph (HRTII) (**a**), eggs of Schistosoma mansoni visualized within the mucosal tissue of the large intestine (**b**) and cSLM of the bladder

showing eggs of Schistosoma haematobium (**c**) (in cooperation with Holtfretter, Fritsche and Reisinger, University of Rostock)

#### **12.6 Current and Future Developments**

Nowadays, in vivo corneal confocal microscopy receives a high level of scientific and clinical attention in ophthalmology. The ability to non-invasively acquire high-resolution images of various cellular structures inside the living cornea has inspired the idea of using this technology for diagnostic purposes. A number of innovations are under development in order to assess their diagnostic potential and the usability of the technology.

#### **12.6.1 Subbasal Nerve Plexus Mosaicking**

A special focus has been placed on the assessment of corneal nerves and their involvement in ocular and systemic diseases, especially on noninvasive and repeatable techniques which can quantify ocular neurodegenerative changes in individuals with diabetes [12].

Results of older studies in this context were always based on morphometric parameter values derived from single cSLM images, each with a field of view on the order of 400 × 400 μm². Recent examinations suggest that a robust morphometric assessment requires the analysis of larger areas of the central cornea in order to compensate for the locally inhomogeneous arrangement of the SNP [66, 67].

In this context, a concept for automated and fast control of the focal plane was developed, which allows for mosaicking of the subbasal nerve plexus and increases the reliability of quantification [68]. This new approach uses a modified RCM in combination with software routines for real-time image acquisition. Furthermore, a new optical design was developed using a piezo actuator which moves a lens inside the modified RCM to control the focal plane without moving the TomoCap. During image acquisition, the focal plane oscillates around the SNP between the basal epithelium and the anterior stroma, guided by a cornea tissue classification (CTC) algorithm.

**Fig. 12.16** Large scale SNP mosaicking of a healthy human subject

This algorithm distinguishes different tissues and thus delivers the optimal offset position for the piezo oscillation around the SNP. With the new piezo-based RCM, fast and well-defined focal plane shifts are possible. As investigations have shown, the optimized CTC is fast enough for real-time use. Preliminary results also suggest that the CTC can significantly increase the quality of SNP mosaics through the exclusion of images from other cell layers. The presented concept is promising for large-scale SNP mosaicking (see Fig. 12.16). This marks a necessary step towards reliable SNP quantification, a promising biomarker for diabetic peripheral neuropathy.

#### **12.6.2 Slit Lamp Microscopy on a Cellular Level Using**  *In Vivo* **Confocal Laser Scanning Microscopy**

Recently, we presented an in vivo method for volumetric reconstruction of the cornea at a cellular level with volume sizes of up to around 250 × 300 × 400 μm³ [69]. For image acquisition, the microscope objective is equipped with a piezo actuator. The automated, closed-loop control of the focal plane enables fast and precise focus positioning. Additionally, a novel contact cap with a concave surface is presented. It clearly reduces eye movements, thereby significantly increasing the cuboid volume of the generated 3D reconstruction (see Fig. 12.17). Using the isotropic volume stacks, sectional views of any orientation can be generated, which opens the window to slit lamp microscopy at a cellular level (see Fig. 12.18).

#### **12.6.3 OCT-Guided** *In Vivo* **Confocal Laser Scanning Microscopy**

Corneal confocal microscopy has become a valuable tool for studying corneal morphology and offers non-invasive in vivo imaging at the cellular level, which is important for current research. However, the technique is not only limited by the small field of view. It is also difficult to specify the exact cSLM image

**Fig. 12.17** The anterior part of the cornea: sketch (**a**) according to Guthoff et al. [70] and 3D reconstruction (**b**) according to Bohn et al. [69]

**Fig. 12.18** Conventional slit lamp (**a**), histology (**b**) and cross-section after 3D reconstruction (**c**) exemplifying the potential of the laser-based slit lamp

location and orientation inside the cornea. To overcome this limitation, a commercially available multimodal imaging platform was adapted for in vivo OCT-guided cSLM. A microscope lens was attached to a SPECTRALIS with OCT2 module (Heidelberg Engineering, Germany) using a customized, modular lens adapter and a piezo actuator for computerized focus control. The light sources of both modalities (cSLM and OCT) are combined within the camera head and share a common beam path through the SPECTRALIS objective and the added microscope lens. The optical path length of the OCT reference beam was changed to account for the additional optical component. Multimodal imaging could be performed simultaneously at 8.9 fps with a field of view of 805 × 805 μm² for cSLM (xy-image) and at 90 fps with a field of view of 805 × 1919 μm² for OCT (B-scan, xz-section). Due to the high numerical aperture of the microscope lens, the depth of field is very limited for both images, the cSLM and the OCT. The focal plane can be recognized in the OCT cross-section as a bright surface, which reveals the actual depth position of the cSLM image (see Fig. 12.19). Simultaneously, the cornea's anterior and posterior interfaces can be visualized because of their strong backscattering and the high sensitivity of OCT.

The piezo actuator can shift the cSLM focal plane by up to 600 μm, while the image position and orientation can be tracked in real time. Compared to the conventional state of the art, the OCT-guided cSLM concept significantly improves usability. Real-time assessment of the cSLM image plane location and orientation inside the cornea by means of the OCT cross-section enables improved location-based diagnosis. For the first time, it is possible to specify the angle between the corneal surface and the cSLM image. Further effort is necessary to optimize the system design and OCT scan patterns.

#### **12.6.4 Multiphoton Microscopy**

Corneal cell differentiation in vivo can be performed only on a morphological basis, and in the majority of cases this is not sufficient using cSLM. This window could be opened by multiphoton microscopy. Non-linear interaction mechanisms, such as multiphoton absorption or frequency conversion, can be induced by using a highly focused pulsed laser. With this advanced technology, induced autofluorescence, second-harmonic generation or fluorescence lifetime measurements can be used to produce cell-specific information with subcellular resolution [71–73].

**Fig. 12.19** Example of OCT-guided cSLM: confocal image (left) and cross-sectional OCT image (right). The bright reflection within the OCT image reveals the position of the confocal en-face image with respect to the cornea's anterior and posterior interfaces

#### **12.7 Summary**

In summary, state-of-the-art cSLM allows evaluation of the ocular surface at a cellular level with subsequent 2D mosaicking or 3D reconstruction. Only close cooperation between basic science, clinical science, and industry partners, however, can bring about the fulfillment of the great promise offered by this technology.

#### **References**


confocal microscopy study. Br J Ophthalmol. 2007;91(9):1165–9.


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Anterior Segment OCT**

**13**

Jacqueline Sousa Asam, Melanie Polzer, Ali Tafreshi, Nino Hirnschall, and Oliver Findl

#### **13.1 Introduction**

OCT technology was initially introduced to the ophthalmic field for imaging of the posterior segment, such as the retina and the optic nerve head. However, advancements to the technology have made image acquisition of the ocular surface and anterior segment possible. Imaging of these anatomical structures has been shown to be of significant clinical relevance. Anterior segment OCT imaging allows for visualization and assessment of anterior segment ocular features, such as the tear film, cornea, conjunctiva, sclera, rectus muscles, anterior chamber angle structures, and lens [1].

The first commercial OCT systems specifically designed for anterior segment imaging were the Zeiss Visante OCT™ (Carl Zeiss Meditec, Dublin, CA, USA) and the Slit-Lamp OCT (SL-OCT, Heidelberg Engineering GmbH, Heidelberg, Germany). These devices received clearance from the United States Food and Drug Administration (FDA) in 2005 and 2006, respectively. Both time-domain OCT devices employed a longer wavelength light source (1310 nm) and provided images with relatively high axial range and penetration, but at the cost of axial resolution (between 18 and 25 μm) and scan speed (2000 A-scans/second with the Visante OCT system, and 200 A-scans/second with the SL-OCT) [2]. With the commercial introduction of spectral-domain OCT (SD-OCT) technology, imaging of the anterior segment at much higher speeds (>25,000 A-scans/second) and with better axial resolution became possible [2]. However, these commercial SD-OCT devices use shorter wavelength light sources (820–880 nm), optimized for posterior segment imaging, resulting in a more limited image depth range and potentially lower penetration of deeper structures.
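The trade-off between center wavelength and axial resolution follows from the standard coherence-length relation Δz ≈ (2 ln 2/π)·λ₀²/Δλ (FWHM, Gaussian source spectrum, in air). The sketch below illustrates this relation; the spectral bandwidths used are assumed illustrative values, not the actual specifications of the devices named above:

```python
import math

def axial_resolution_um(center_wavelength_nm, bandwidth_nm):
    """FWHM axial resolution of OCT in air, assuming a Gaussian source spectrum."""
    lam = center_wavelength_nm * 1e-3   # convert nm -> micrometers
    dlam = bandwidth_nm * 1e-3
    return (2 * math.log(2) / math.pi) * lam**2 / dlam

# Assumed bandwidths for illustration:
r_1310 = axial_resolution_um(1310, 40)  # ~19 um, within the 18-25 um range above
r_840 = axial_resolution_um(840, 50)    # ~6 um, several-fold finer
```

For the same bandwidth, the λ₀² dependence is why the shorter-wavelength SD-OCT sources yield finer axial resolution than 1310 nm sources.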

Swept-source OCT (SS-OCT) technology combined with a longer wavelength light source offers inherent characteristics that are well suited for anterior segment imaging and analysis. While a longer wavelength light source allows for imaging along a large image depth, the SS-OCT technique ensures minimal sensitivity roll-off along this depth. This combination, together with the very short acquisition time, allows for high-contrast images of the entire anterior chamber, down to the posterior surface of the lens [1, 2]. The images provide visualization and analytics of clinically relevant anterior segment structures of the human eye on one device.

J. S. Asam · M. Polzer · A. Tafreshi (\*) Heidelberg Engineering GmbH, Heidelberg, Germany

N. Hirnschall · O. Findl Vienna Institute for Research in Ocular Surgery (VIROS), Hanusch Hospital, Vienna, Austria

These structures include the cornea, anterior chamber, iris, and lens. A long wavelength light source combined with SS-OCT technology can also serve as a tool to measure the axial length of the human eye. The above-mentioned structures and parameters are widely used in ophthalmology for assessment of clinically relevant parameters such as corneal topography, corneal tomography, anterior segment analysis, biometry and calculation of intraocular lens power.

#### **13.2 Anterior Segment Spectral-Domain OCT (SD-OCT)**

The SPECTRALIS Anterior Segment Module (ASM) is an add-on lens, accompanied by a software package that can be added to the SPECTRALIS SD-OCT device. The ASM offers high-resolution cornea, sclera and anterior chamber angle images.

The lateral scan range varies from 8 to 16 mm with two different scan patterns: single and raster. Predefined scan patterns are available depending on the application (cornea, anterior chamber angle, and sclera) (Table 13.1). Heidelberg Noise Reduction and TruTrack Active Eye Tracking allow for enhanced image detail and precise alignment, respectively. The software also provides an interactive zoom function, which can be positioned on the area of interest during live image acquisition.

The layers of the cornea can be seen in detail with the SPECTRALIS ASM, aiding in the assessment of corneal thickness (Fig. 13.1). These high-resolution images add clinically relevant information for the detection and management of various corneal abnormalities including, but not limited to, corneal opacities, corneal scars, and corneal dystrophies. Such images are also useful in the planning or post-surgical evaluation of penetrating and lamellar keratoplasties, and refractive surgeries (Fig. 13.2). It has been shown that ultra-high-resolution OCT

**Table 13.1** SPECTRALIS ASM Module, Heidelberg Engineering


*OCT* optical coherence tomography, *cSLO* confocal scanning laser ophthalmoscope

**Fig. 13.1** SPECTRALIS ASM. Healthy cornea

**Fig. 13.2** SPECTRALIS ASM. Cornea after refractive surgery, showing the LASIK flap

**Fig. 13.3** SPECTRALIS ASM. Sclera, anterior chamber angle and cornea of a healthy patient

**Fig. 13.4** SPECTRALIS ASM. Sclera after trabeculectomy, showing the filtering bleb

(spectral-domain) images are able to detect *Acanthamoeba* disease in the cornea, aiding the diagnosis of this rare keratitis in a non-contact and patient-friendly manner [3, 4]. Another application is the visualization and quantification of corneal haze after cross-linking, which can be compared to the normal healing process [5]. Cytomegalovirus-associated keratitis can also be imaged with OCT technology, presenting with a dendritic, dome-shaped, quadrangular, or saw-tooth appearance, as well as a coin-shaped structure [6].

Images of the scleral anatomy may be clinically useful in the diagnosis and management of diseases such as scleral and conjunctival neoplasia and inflammation. Surgical procedures affecting the scleral structure, such as a filtering bleb after glaucoma surgery, may also be evaluated for functional efficacy with high-resolution images (Figs. 13.3 and 13.4).

The 16 mm white-to-white scan offers an image in which both anterior chamber angles can be seen in a single OCT B-Scan, facilitating anterior chamber angle assessment. In addition, manual measurement tools can be used to measure the cornea, sclera and chamber angle structures. Both ACA (anterior chamber angle) and AOD (angle opening distance) measurements are available, adding clinically relevant information for the assessment of narrow or closed angles (Fig. 13.5). These measurement tools are not 510(k) FDA cleared.
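The geometry behind such manual tools can be illustrated with elementary trigonometry. This is only a schematic sketch with hypothetical coordinates, not the device's measurement algorithm: the angle is treated as the opening at the angle recess between rays toward a corneal point and an iris point, and the opening distance as the straight-line distance between a corneal endothelial point and the opposite iris point.

```python
import math

def angle_deg(apex, p1, p2):
    """Angle at `apex` (degrees) between the rays toward p1 and p2."""
    v1 = (p1[0] - apex[0], p1[1] - apex[1])
    v2 = (p2[0] - apex[0], p2[1] - apex[1])
    cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos_a))

def distance_um(a, b):
    """Euclidean distance between two image points (coordinates in micrometers)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical B-scan coordinates in micrometers: angle recess, a point on the
# corneal endothelium, and the point on the iris directly opposite it.
recess = (0.0, 0.0)
cornea_pt = (500.0, 300.0)
iris_pt = (500.0, 0.0)
aca = angle_deg(recess, cornea_pt, iris_pt)  # opening angle in degrees
aod = distance_um(cornea_pt, iris_pt)        # 300 um in this toy example
```

In clinical practice AOD is defined at a fixed distance (e.g., 500 μm) from the scleral spur; the toy coordinates above merely illustrate the arithmetic.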

**Fig. 13.5** SPECTRALIS ASM, anterior chamber angle measuring tools. ACA (anterior chamber angle) and AOD (angle opening distance) of a healthy patient

#### **13.3 Anterior Segment Swept Source OCT (SS-OCT)**

ANTERION utilizes high-resolution SS-OCT images to combine corneal examination, eye and anterior segment biometry, IOL (intraocular lens) calculation and anterior segment imaging via four different applications: Cornea App, Cataract App, Metrics App and Imaging App. The SS-OCT technology at 1300 nm delivers high-resolution images of the anterior segment along a large image depth, at a fast acquisition speed. The detailed images of the anterior surface of the cornea to the posterior surface of the lens and the anterior chamber angle allow for reliable measurements needed in planning of anterior segment surgeries, such as cataract surgery, refractive surgery and keratoplasty. In addition, the high quality images serve as a diagnostic aid for various anterior segment abnormalities. Table 13.2 summarizes the measurement parameters and features while Table 13.3 indicates the technical specifications of each ANTERION app.

#### **13.3.1 SS-OCT and Cornea Evaluation**

Fast acquisition of images using SS-OCT technology allows for high-resolution imaging with reliable resultant measurements. A total of 65 radial B-Scan images (256 A-scans per B-Scan) are acquired in less than one second using the ANTERION Cornea App. All corneal maps are 8 mm in diameter and are generated from SS-OCT image data. These maps include anterior and posterior axial curvature, tangential curvature and elevation maps, as well as total corneal power, anterior and total corneal wavefront, and pachymetry maps (Fig. 13.6). In addition, a detailed wavefront parameter analysis with aberration quantification is provided for both the anterior and the total corneal wavefront. It has been shown that SS-OCT pachymetry results are more repeatable than Scheimpflug imaging results in healthy corneas [7]. Furthermore, SS-OCT has been shown to have higher reliability and repeatability than high-resolution Scheimpflug imaging in patients who have undergone corneal grafting [8].
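The quoted acquisition time can be sanity-checked with simple arithmetic. The sweep rate below is an assumed illustrative value, not a published device specification:

```python
def acquisition_time_s(n_bscans, a_scans_per_bscan, sweep_rate_hz):
    """Ideal acquisition time for a scan pattern, ignoring fly-back overhead."""
    return n_bscans * a_scans_per_bscan / sweep_rate_hz

# 65 radial B-scans of 256 A-scans each; assuming a 50 kHz swept source,
# the pattern fits comfortably within one second:
t = acquisition_time_s(65, 256, 50_000)  # ~0.33 s
```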

The corneal asymmetry maps and parameters may be used to detect abnormalities that present asymmetrically, e.g., in early corneal ectasia (Fig. 13.7). In addition, corneal maps and parameters can be shown as a series of follow-up examinations, facilitating the ability to identify changes over time, for example after surgical procedures and orthokeratology (Fig. 13.8).

OCT measurements and Scheimpflug imaging are the gold standard for keratoconus diagnostics (Fig. 13.9) [9]. The SS-OCT measurement techniques have been shown to detect keratoconus in early stages, with improved accuracy compared to other methods [10].

Evaluation of corneal surgical interventions with SS-OCT covers a large variety of techniques, such as refractive surgery, keratoplasty, intrastromal corneal rings and others (Fig. 13.10) [1, 11, 12]. In general, SS-OCT measurements have been shown to have better repeatability than Scheimpflug imaging for corneal biometric measurements in corneal graft evaluation [8]. Due to this advantage, SS-OCT imaging is increasingly accepted as the method of choice for such measurements, gradually replacing other imaging techniques. High-resolution OCT images also allow for a detailed analysis of the post-operative morphology of the cornea, including comparison


**Table 13.2** ANTERION parameters and features per application



**Table 13.3** ANTERION technical specifications per application

*AS* anterior segment, *NA* not applicable

**Fig. 13.6** ANTERION Cornea App single view. Pachymetry map, map options and corneal parameters of a healthy eye

**Fig. 13.7** ANTERION Cornea App both eyes (OU) view. Total corneal power maps of both eyes, differential map (center below) and parameters (table on the right) of a healthy patient

**Fig. 13.8** ANTERION Cornea App follow-up view. Anterior axial curvature maps of the same eye with differential map and progression analysis of a healthy patient

**Fig. 13.9** ANTERION Cornea App multiview. Five corneal maps (anterior tangential curvature, anterior elevation, corneal pachymetry, posterior elevation and anterior

axial curvature), OCT image and wavefront parameters of a keratoconus eye

**Fig. 13.10** ANTERION Imaging App. Intrastromal corneal ring in a keratoconus eye

between follow-ups. Furthermore, the demarcation line seen after cross-linking can be detected using SS-OCT technology, but not with a slit-lamp exam [13].

Evaluation of the tear film is another possible application of SS-OCT on the cornea. Fukuda et al. [14] and Akiyama et al. [15] showed significant correlations between the tear film meniscus and vital staining scores and Schirmer test results. In a more recent study, the effect of different dry eye treatment options was also evaluated using SS-OCT [16]. SS-OCT technology is therefore a potentially useful method to document tear film abnormalities as well as to monitor treatment.

OCT measurements have also been shown to allow for distinction of bacterial, viral and fungal keratitis from retrocorneal plaques. In cases of bacterial and viral keratitis, there is a distinct boundary between the corneal endothelial surface and the plaque; this boundary is more diffuse in cases of fungal keratitis [17]. In cases of mild to moderate corneal scarring due to keratitis or other causes, visual quality is often severely decreased. Recently, it was found that the decrease in visual acuity in these patients is caused not only by the corneal opacification itself, but also by the increased higher-order aberrations generated by an irregular corneal surface, which may be treated with rigid gas-permeable lenses [18, 19]. SS-OCT measurements may be used in the future to detect corneal irregularities and to help determine whether contact lenses are sufficient or surgical intervention is necessary.

Novel full-field and ultra-high-resolution OCT devices have recently been used to evaluate human corneas and have shown that even the nuclei of the corneal endothelium can be imaged [20]. Another novel application is the use of OCT angiography for evaluating and quantifying corneal vascularization. Several studies have shown that this measurement technique is equivalent to conventional angiography [21–23].

#### **13.3.2 SS-OCT and Cataract Evaluation**

Some anterior segment SS-OCT devices provide visualization of the whole crystalline lens (Fig. 13.11), showing utility for cataract quantification and density documentation [24]. Furthermore, lens measurements using this technology have been shown to be highly reproducible [25]. Until now, not all types of cataract can be detected and measured with equal accuracy. Quantifying and documenting cataracts with SS-OCT could be useful in identifying eyes with good visual acuity but symptoms of haziness, glare and halos; these eyes may benefit from cataract surgery. This is especially the case for anterior cortical cataracts, which often allow for good visual acuity but cause significantly increased stray light levels.

One of the most significant applications of SS-OCT technology is optical biometry. SS-OCT with a wavelength of 1055 nm has allowed axial length measurements with significantly better penetration through dense cataracts compared to conventional optical biometry techniques [26].

**Fig. 13.11** ANTERION Metrics App. Dense cataract OCT image

The SS-OCT ANTERION with a wavelength of 1300 nm is able to perform corneal, anterior chamber, lens and axial length measurements (Fig. 13.12). The ability to include all of these parameters and the IOL (intraocular lens) calculation can improve workflow efficiency (Fig. 13.13).

Toric intraocular lens calculation is based on measurements of the anterior surface of the cornea. More recently, estimation algorithms for the posterior surface of the cornea have been introduced, depending on the steep axis of the astigmatism on the anterior surface [27–30]. These estimations have resulted in significant improvements in toric IOL power calculations compared with conventional ones that do not include measurements of the posterior cornea [31]. ANTERION offers a toric IOL calculator that takes the incision location and surgically induced astigmatism (SIA) into account, and enables the surgeon to use the total corneal power as the corneal parameter for the calculation (Fig. 13.14). Further improvements can potentially be achieved if the post-operative tilt of a toric IOL is predicted using pre-operative

**Fig. 13.12** ANTERION Cataract App both eyes (OU) view. Total corneal power maps, anterior segment OCT section image and OCT intensity graph, axial length diagram and anterior segment parameters of a patient with cataract


**Fig. 13.13** ANTERION Cataract App. Spheric IOL (intraocular lens) calculator

**Fig. 13.14** ANTERION Cataract App. Toric IOL calculator

SS-OCT measurements of the crystalline lens [32]. The concept behind this idea is that the post-operative tilt of an IOL has an influence on the post-operative refraction (and even more on higher-order aberrations). This knowledge can be used to adapt toric IOL power calculations in a ray tracing model to improve the post-operative refractive outcome.
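Combining corneal astigmatism with surgically induced astigmatism is commonly done by double-angle vector addition: a cylinder of magnitude C at axis θ becomes the vector (C·cos 2θ, C·sin 2θ), vectors are summed, and the result is converted back. The sketch below illustrates this standard textbook method only; it is not the proprietary calculator of any particular device:

```python
import math

def to_vec(mag, axis_deg):
    """Cylinder (magnitude, axis) -> double-angle vector."""
    a = math.radians(2 * axis_deg)
    return (mag * math.cos(a), mag * math.sin(a))

def from_vec(x, y):
    """Double-angle vector -> cylinder (magnitude, axis in [0, 180))."""
    mag = math.hypot(x, y)
    axis = math.degrees(math.atan2(y, x)) / 2 % 180
    return mag, axis

def net_astigmatism(corneal_mag, corneal_axis, sia_mag, sia_axis):
    """Combine pre-op corneal astigmatism with surgically induced
    astigmatism (SIA) by double-angle vector addition."""
    cx, cy = to_vec(corneal_mag, corneal_axis)
    sx, sy = to_vec(sia_mag, sia_axis)
    return from_vec(cx + sx, cy + sy)
```

For example, 0.5 D of SIA aligned with 2.0 D of corneal astigmatism at 90° simply adds to 2.5 D at 90°, while an equal SIA oriented 90° away cancels the astigmatism entirely.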

Intra-operatively, OCT measurements of aphakic eyes have been shown in some publications to lead to better prediction of the post-operative IOL position than pre-operative measurements [33, 34]. SS-OCT images can also be used to evaluate and confirm the correct position of the implanted IOL after cataract surgery (Fig. 13.15).

Furthermore, OCT measurements have been shown to be useful for size calculation of posterior chamber phakic lenses, such as the implantable collamer lens [35]. This may help avoid excessive vaulting of such IOLs as well as touching of

**Fig. 13.15** ANTERION Metrics App. Posterior chamber intraocular lens after cataract surgery with anterior vitreous visualization

the IOL to the crystalline lens. The post-operative IOL location can be visualized and followed up with high-resolution images (Fig. 13.16).

#### **13.3.3 SS-OCT and Anterior Chamber Evaluation**

One of the main application areas for SS-OCT is to evaluate the anterior chamber angle [36]. It has been shown that SS-OCT measurements are comparable to gonioscopy, with the additional benefit of better documentation and a non-contact technique [37]. However, it should be mentioned that indentation gonioscopy should not be replaced by OCT.

Due to a large dynamic range and image size, ANTERION captures the highly scattering iris together with the weakly scattering sclera and lens in a single image, allowing for good visualization of the anterior chamber structures. In addition to conventional anterior chamber angle parameters, such as ACA, SSA, AOD and TISA, lens measurements (lens thickness and lens vault) are also available. Information about the anterior chamber (such as depth, volume, ACA distance and spur-to-spur distance) and cornea (central corneal thickness and white-to-white) is displayed as well (Figs. 13.17 and 13.18). This helps in assessing the anterior chamber angle architecture and its changes over time or after treatment.

**Fig. 13.16** ANTERION Metrics App. Phakic posterior chamber intraocular lens and peripheral patent iridotomy

**Fig. 13.17** ANTERION Metrics App single view. OCT image and anterior chamber parameters of a healthy patient with open anterior chamber angle

**Fig. 13.18** ANTERION Metrics App multiview. OCT images of the whole anterior chamber and 360° ACA 500 and TISA 750 graph analysis

SS-OCT assessment of the anterior chamber angle is especially useful due to its short measurement time and non-contact nature. Repeated measurements of the angle can be useful to evaluate diurnal changes [38]. Furthermore, it can be used to evaluate the iris and angle configuration before and after laser iridotomy. With high-resolution SS-OCT imaging,

**Fig. 13.19** ANTERION Imaging App. Conjunctival nevus with also visualization of the iris, anterior chamber angle, ciliary body and rectus muscle

it is also possible to detect changes of the iris in neovascular glaucoma [39]. SS-OCT imaging offers other potential anterior chamber applications, such as flare measurement in the anterior chamber, which has been shown to be comparable to laser flare meter measurements [40].

#### **13.3.4 SS-OCT and Anterior Segment Imaging**

There is a variety of other applications for SS-OCT imaging of the anterior segment. The long wavelength enables visualization of the anterior segment along a large image depth, ranging from the anterior surface of the cornea to the posterior surface of the lens. Other ocular structures, such as the sclera, ciliary body and rectus muscles can also be well visualized. The visualization of these structures enables the diagnosis of abnormalities, such as conjunctival nevus (Fig. 13.19). Evaluating the presence and extent of penetration of the cornea and sclera in trauma cases is another important application that is enabled by SS-OCT imaging [41].

#### **13.4 Summary and Outlook**

The impact and potential applications of anterior segment OCT in clinical practice have been steadily growing. Technological improvements, such as acquisition speed and image resolution, have made OCT imaging a key part of clinical evaluation, not just for the cornea but for the whole anterior segment. Multiple scans and precise measurements allow for diagnosis and follow-up with high confidence.

Spectral-domain OCT allows for high-resolution images of the anterior segment. With these images, the SPECTRALIS Anterior Segment Module provides detailed visualization of the corneal layers, with good histopathological correlation. Some important clinical applications include LASIK flap evaluation, keratoconus treatment (e.g., corneal rings and cross-linking), Descemet membrane detachments, and corneal transplantations.

ANTERION combines swept-source OCT (SS-OCT) technology and a longer wavelength light source of 1300 nm to optimize image acquisition of anterior segment structures, with reliable resultant analyses. This combination allows for minimal sensitivity roll-off along a large image depth, resulting in high-contrast images of the entire anterior segment of the eye, down to the posterior lens. It enables detailed visualization and analytics of anterior segment structures using one device, including the cornea, anterior chamber and angle, iris, and lens. ANTERION is also able to measure the axial length of the human eye.

ANTERION can substantially improve the workflow in clinics as it is a real all-in-one solution for anterior segment examination. The cornea examination, eye and anterior segment biometry, IOL calculation and anterior segment imaging can be performed quickly, facilitating the acquisition and optimization of data analyzed by ophthalmologists. The ability to add new features and functionalities reinforces the potential advancements of this technology. The modular design of the ANTERION offers clinicians the ability to tailor the device to their practice needs, with the capability to add features when indicated.

In summary, SS-OCT technology combined with a longer wavelength light source is optimal for anterior segment imaging and measurements. It is likely that this combination will replace other technologies in the near future and new application areas will be found.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Femtosecond-Laser-Assisted Cataract Surgery (FLACS)**

**14**

Hui Sun, Andreas Fritz, Gerit Dröge, Tobias Neuhann, and Josef F. Bille

#### **14.1 Introduction**

We obtain more than 80% of our information from the external world through vision. Good vision depends on the cornea and lens as refractive components. The crystalline lens is a transparent, biconvex structure in the eye that, along with the cornea, helps to refract light to be focused on the retina. Maintenance of lenticular shape and transparency is critical for refraction. The lens accounts for about one-third of the total refractive power of the eye. A slight change in the lenticular contour can result in refractive error. Small changes in the transparency or the shape of the lens can also cause visual distortion.

The lens is a part of the anterior segment of the eye. Anterior to the lens is the iris, which can control the amount of light that enters the eye.

H. Sun

Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China

A. Fritz · G. Dröge University of Chinese Academy of Sciences, Beijing, China

T. Neuhann Augenklinik am Marienplatz, Munich, Germany

J. F. Bille (\*) University of Heidelberg, Heidelberg, Germany

The lens is suspended in place by the zonules. Posterior to the lens is the vitreous body. The lens has an ellipsoid, biconvex shape. The anterior surface of the lens is less curved than its posterior surface. Normally, in adults the lens is 10 mm in diameter and has an axial thickness of about 4 mm.

The lens has three main parts: the capsule, the epithelium, and the fibers. The lens capsule forms the outermost layer of the lens, and the lens fibers form the bulk of the interior. The lens epithelium is a simple cuboidal epithelium located in the anterior portion of the lens between the capsule and the fibers. The lens itself lacks nerves, blood vessels, and connective tissue. The refractive index of a human lens varies from about 1.406 in the center down to 1.386 in the less dense parts of the lens.

#### **14.2 Cataract and Surgery**

The most prevalent ocular disease and the major cause of blindness in the world is cataract. It is the third leading cause of preventable blindness in the United States and the most frequently cited self-reported reason for visual impairment. Every year, there are more than eight million physician office visits due to visual disability from cataracts. Either clumps of protein or yellow-brown pigment may be deposited in the lens, reducing the transmission of light to the retina. Such opacity of the lens, whether a small local opacity or a diffuse general loss of transparency, is called a cataract. To be clinically significant, it must cause a significant reduction in visual acuity or a functional impairment. Signs and symptoms of cataracts may include faded colors, blurry vision, halos around lights, trouble with bright lights, and trouble seeing at night. Cataracts are most commonly caused by aging but may also occur due to diabetes mellitus, drugs, ultraviolet (UV) radiation, smoking, alcohol and nutrition. The three common types of this disease are nuclear, cortical, and posterior subcapsular cataract.

Cataracts are graded by visual inspection, with numerical values assigned to indicate severity; a slit lamp is the standard tool for such an examination. The Oxford Clinical Cataract Classification and Grading System, the Johns Hopkins system, and the Lens Opacity Classification System (LOCS, LOCS II, and LOCS III) are alternative grading systems advocated for use in epidemiological studies of cataract [1–3]. Most people over the age of 60 have some degree of cataract formation. The extent of the patient's visual disability determines the treatment decision. If the cataract is not serious, surgery may not be needed; usually, however, there is no alternative to cataract surgery to correct visual impairment.
In most cases, the standard of care in treating cataract is removal of the cataract by one of two surgical procedures: phacoemulsification (PE or phaco) and extracapsular cataract extraction. Following surgical removal of the crystalline lens, an artificial intraocular lens (IOL) is implanted. Cataract surgery is normally performed by an ophthalmologist in a fast procedure that causes little or no discomfort to the patient.

PE or phaco is the most common technique used in the United States today. It uses a machine with an ultrasonic probe to remove the cataract. After the opening incision and anterior capsulotomy, the ultrasonic probe emulsifies the hard nucleus, enabling the ophthalmologist to remove the lens material using a suction device. The physical depth of the anterior chamber is maintained during this procedure. The opening is then enlarged to allow insertion of a posterior chamber IOL into the capsular bag. An IOL is usually implanted into the eye either through a small incision using a foldable IOL or through an enlarged incision using a polymethylmethacrylate (PMMA) lens.

After cataract surgery, the patient will be instructed to use anti-inflammatory eye drops for a few weeks. The eye will be mostly recovered within a week, and complete recovery takes about a month. This surgery has a high success rate and is the most common ophthalmic surgery procedure, with ~19.5 million procedures performed worldwide in 2011.

#### **14.3 History of Femtosecond-Laser-Assisted Cataract Surgery**

The application of ultrashort laser pulses to in vivo ablation of cataractous lens tissue was first proposed in 1992 (US patent 5,246,435, Sept. 21, 1993 (J. F. Bille, D. Schanzlin): "Method for Removing Cataractous Material"), and a related FDA-regulated initial clinical study was performed at the eye clinic of the University of Saint Louis (Fig. 14.1). The abstract of US 5,246,435 reads:

"A method for using an ophthalmic laser system to remove cataractous tissue from the lens capsule of an eye requires phacofragmentation of the lens tissue and subsequent aspiration of the treated tissue. More specifically, a cutting laser is used to create various strata of incisions through the lens tissue. Within each stratum, each incision is made in the direction from a posterior to an anterior position. The strata are stacked on each other in the posterior-anterior direction, and each includes a plurality of minute incisions. The most posterior stratum of incisions is created first by referencing the cutting laser back into the lens tissue from the posterior capsule. Subsequent, more anterior strata are created by referencing the cutting layer from the tissue treated by the previous stratum of incisions. In each stratum, the vapors which result from the incisions are allowed to infiltrate between the layers of the lens tissue to fragment and liquify the tissue. The liquified lens tissue is then aspirated."

**Fig. 14.1** Method for removing cataractous material, U.S. patent no. 5,246,435 [4]

Shortly before the expiration date of US patent 5,246,435, the commercialization of Femtosecond-Laser-Assisted cataract surgery was initiated by several ophthalmic laser companies.

#### **14.4 All-Solid-State Chirped-Pulse-Amplification Femtosecond Laser**

A femtosecond is the SI unit of time equal to 10⁻¹⁵ of a second. A femtosecond laser is a laser with a pulse duration in the femtosecond range. Different types of lasers can produce femtosecond pulses, such as dye lasers, solid-state lasers, and fiber lasers. A solid-state laser uses a gain medium that is a solid, not a liquid as in dye lasers, and not a gas as in gas lasers. Semiconductor-based lasers are also solid state but are generally considered a separate class of solid-state lasers. Laser action has been achieved in many hundreds of solid-state media, but relatively few types are widely used. Of these, the most common are probably neodymium-doped glass (Nd:glass) and neodymium-doped yttrium aluminum garnet (Nd:YAG). Typically, solid-state lasers are optically pumped, using either a flash lamp or laser diodes. Diode-pumped solid-state lasers tend to be much more efficient and have become much more common as the cost of high-power semiconductor lasers has decreased. A femtosecond laser has the following basic elements: a broadband gain medium, a laser cavity, an output coupler, a dispersive element, a phase modulator, and a gain/loss process controlled by the pulse intensity or energy. These components are crucial for the function of the system; for example, the gain rod in an Nd:glass laser can combine the functions of gain, phase modulation, loss modulation, and gain modulation. The generation of femtosecond pulses often involves a dispersive mechanism of pulse compression: phase modulation broadens the pulse bandwidth, and dispersion eliminates the chirp and compresses the pulse.

A single femtosecond laser pulse is employed to ablate eye tissue, achieving enhanced precision and minimal collateral tissue effects during femtosecond laser eye surgery. The ablation threshold for pulse durations of a few hundred femtoseconds is about 1–2 J/cm² [5]. The pulse energy available directly from the oscillator cannot satisfy this threshold requirement, even when the laser beam is tightly focused. A technique called chirped pulse amplification (CPA) is therefore used to amplify a femtosecond laser pulse and satisfy the intensity requirement for laser in situ keratomileusis (LASIK) surgery (Fig. 14.2). CPA is the state-of-the-art technique used by almost all of the highest-power lasers in the world today; it was originally introduced in the 1960s as a technique to increase the available power in radar systems [6]. Because nonlinear processes such as self-focusing can cause serious damage to optical components if a femtosecond

**Fig. 14.2** Left: High repetition rate femtosecond pulse train. Right: Energy buildup during amplification and output signal

**Fig. 14.3** Chirped pulse amplification (CPA) technique

laser pulse is directly amplified, the achievable peak power of femtosecond laser pulses was limited before the advent of CPA. CPA for femtosecond lasers was invented by Strickland and Mourou in the 1980s [7]. With chirped pulse amplification, the pulse energy can typically be increased from the nJ to the mJ level at repetition rates on the order of kHz, i.e., average powers on the order of watts are obtained. A scheme of the chirped pulse amplification technique is shown in Fig. 14.3.
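The quoted ablation threshold can be turned into a rough pulse-energy requirement. The sketch below, with illustrative spot size and fluence values (assumed, not taken from the text), shows why oscillator-level nJ pulses fall short once the focal spot is more than a couple of micrometers across:

```python
import math

def pulse_energy_nJ(fluence_J_per_cm2: float, spot_diameter_um: float) -> float:
    """Pulse energy (nJ) needed to reach a given fluence over a focal spot,
    approximated here as a uniform disc (a deliberate simplification)."""
    radius_cm = (spot_diameter_um / 2.0) * 1e-4  # 1 um = 1e-4 cm
    area_cm2 = math.pi * radius_cm**2
    return fluence_J_per_cm2 * area_cm2 * 1e9

# At a 1.5 J/cm2 threshold, a 5-um spot already needs ~300 nJ per pulse,
# beyond the ~nJ pulses a typical oscillator delivers without amplification.
E_nJ = pulse_energy_nJ(1.5, 5.0)
print(f"{E_nJ:.0f} nJ")
```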

To keep the intensity of a femtosecond laser pulse below the threshold of nonlinear effects inside the amplifier, the pulse is first stretched in time using a pair of gratings arranged so that the low-frequency components of the pulse travel a shorter path than the high-frequency components. This exploits group velocity dispersion (GVD). After passing through the grating pair, the laser pulse becomes positively chirped: the high-frequency components lag behind the low-frequency components, and the pulse duration is longer than the original by a factor of 10³–10⁵. The stretched pulse, whose intensity is now sufficiently low, can safely enter the amplifier, where it is amplified by a factor of 10³ or more [8]. Finally, the amplified pulse is recompressed back to the femtosecond range by reversing the stretching process, achieving a peak power orders of magnitude higher than laser systems could generate before the invention of CPA. The physical principle behind the stretcher is GVD, which causes a short pulse of light to spread in time because the different frequency components of the pulse travel at different velocities. There are three common ways to impose GVD on a femtosecond laser pulse: prisms, gratings, and a Gires–Tournois interferometer. However, a typical femtosecond CPA system requires the pulse to be stretched to several hundred picoseconds, which means the different wavelength components must experience path-length differences of about 10 cm. The most practical way to achieve this is with a grating-based stretcher and compressor.
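The stretching requirement maps directly onto geometry: delaying one end of the spectrum by several hundred picoseconds requires a path-length difference of roughly ΔL = c·Δt. A one-line check, assuming a 300 ps stretch:

```python
C = 299_792_458.0  # speed of light, m/s

def path_difference_cm(stretch_ps: float) -> float:
    """Extra optical path (cm) needed to delay one end of the spectrum
    by the stretched pulse duration."""
    return C * stretch_ps * 1e-12 * 100.0

# Stretching to 300 ps corresponds to ~9 cm of path difference,
# consistent with the ~10 cm figure quoted in the text.
print(f"{path_difference_cm(300.0):.1f} cm")
```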

#### **14.5 Femtosecond Laser Application Systems for Clinical Use**

To accurately control the power of ultrashort laser pulses for applications in ophthalmology, the laser system needs to be coupled with a precise, fast deflecting, and focusing unit as well as a high-contrast microscope suiting the needs of an ophthalmic surgeon.

After exiting the laser, the beam is coupled into the application arm. A mechanical shutter blocks the laser and opens only during the laser procedure. Each laser procedure is in principle defined by a three-dimensional data array of volume elements (voxels) to be ablated and a corresponding timeline that defines the ablation sequence. Consequently, the laser focus has to be precisely positioned in all three dimensions. For that purpose, a fully computer-controlled mirror scanning unit is employed (see Fig. 14.4). Optimized scan patterns are generated from a simple set of user-defined parameters (e.g., flap thickness and diameter, hinge angle in pre-LASIK cutting of a flap) and executed by real-time control hardware. Behind the scanner unit, the beam passes through an expanding telescope, which increases the laser beam diameter to achieve a tight focus behind the cutting lens. Because the laser fluence has to exceed the threshold for plasma-mediated ablation, the laser beam must be focused to a very small spot size of several micrometers to achieve an exact ablation. By the laws of optics, the focal spot size of a beam decreases with a larger entrance aperture of the focusing lens. The lateral ablation zone of the demonstrated scanning unit has a diameter of up to 10 mm in the cornea, with a focus shift range in the z direction of up to 3 mm. A schematic of the complete application system setup is shown in Fig. 14.4.
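The expansion of user-defined parameters into a voxel timeline can be sketched as follows. This is a hypothetical illustration of a cylindrical (capsulotomy-style) scan pattern, not any vendor's actual algorithm; the function and parameter names are invented for the example:

```python
import math

def capsulotomy_ring(diameter_mm, depth_start_um, depth_end_um,
                     spot_spacing_um, layer_spacing_um):
    """Hypothetical sketch: expand a small parameter set into an ordered
    (x, y, z) voxel sequence of stacked rings of laser spots, scanned
    from posterior (deep) to anterior (shallow)."""
    r_um = diameter_mm * 1000.0 / 2.0
    n_spots = max(1, int(2 * math.pi * r_um / spot_spacing_um))
    voxels = []
    z = depth_start_um
    while z >= depth_end_um:  # posterior to anterior
        for k in range(n_spots):
            theta = 2 * math.pi * k / n_spots
            voxels.append((r_um * math.cos(theta), r_um * math.sin(theta), z))
        z -= layer_spacing_um
    return voxels

# A 5-mm-diameter cut from 600 um down to 300 um depth, 5-um spot spacing,
# 10-um layer spacing, expands to nearly 100,000 individual spot positions.
pattern = capsulotomy_ring(5.0, 600.0, 300.0, 5.0, 10.0)
print(len(pattern))
```

Even this toy example makes clear why the positioning must be handled by real-time control hardware rather than manually.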

A surgical microscope adapted to the system provides the surgeon with a binocular, stereoscopic image to follow the progress of the procedure. To support the handling needs of various surgical procedures, the microscope offers different field-of-view settings. In addition, a CCD camera is integrated into the microscope for monitoring and recording the laser procedures. In FLACS systems, a computer-guided laser linked to an optical imaging system (e.g., OCT) performs the corneal incision, capsulotomy, and lens fragmentation steps, as well as the setup and in-vivo control of the surgical procedure. In the next paragraph, the latest swept-source OCT technology, which was developed by Heidelberg Engineering GmbH, Heidelberg, Germany, and is applied in the VICTUS Femtosecond Laser Platform, is described.

Femtosecond laser systems have successfully entered the cataract surgery market since LenSx (Alcon Laboratories Inc.) introduced its first commercial system in 2008 as a promising technological advance, one that plays an ever-increasing role in cataract surgery by automating the three main surgical steps: corneal incision, capsulotomy, and lens fragmentation. It is an innovative and growing technology for cataract surgery because of the enhanced precision and minimized collateral tissue effects of femtosecond laser ablation. This attribute of femtosecond lasers is especially important for femtosecond laser cataract surgery, wherein the preservation of ocular structures such as the capsular bag is critical for good visual outcomes. The preliminary reports on the intraocular use of femtosecond lasers were promising [9–11]. The first clinical report of a human eye treated by femtosecond laser cataract surgery was from Hungary in 2008 [12], and interest in the use of femtosecond lasers in cataract surgery has grown steadily since. The FDA approved the use of femtosecond lasers for cataract surgery in 2010, and their application in cataract surgery increased dramatically after the LenSx system was cleared for use. In only a few years, femtosecond lasers have become relevant in clinical cataract surgery as an opportunity to improve the quality of the surgical procedure; in a relatively short period of time, the LenSx femtosecond laser system has been used in more than 200,000 procedures worldwide to date [13]. Multiple commercial femtosecond lasers have been cleared by the U.S. FDA for cataract surgery, including use in creating the corneal incision, capsulotomy, and lens fragmentation. These include LenSx (Alcon Laboratories Inc.), Catalys (Abbott Medical Optics), LensAR (LensAR Inc., Orlando, Florida), Victus (Technolas Perfect Vision and Bausch & Lomb, Rochester, New York), and Femto LDV (Ziemer Ophthalmic Systems AG).
They are solid-state femtosecond lasers integrated within an imaging subsystem. The main principles are the same, but they differ in versatility, docking, speed of action, etc.

#### **14.6 Optical Coherence Tomography in Ophthalmic Applications**

Optical coherence tomography (OCT) is an interferometric imaging technology that was introduced by David Huang et al. in 1991 at MIT [14]. It is frequently referred to as "ultrasound with light." It enables a much higher resolution (μm scale) than ultrasound while still allowing imaging depths of several millimeters.

To obtain an image, the OCT light is split into two arms: the sample arm, aimed at the object of interest (e.g., the patient's eye), and the reference arm, e.g., a mirror. The reflected light from both arms is then brought together to form an interference pattern. Processing this pattern yields a depth reflectivity profile, the so-called A-scan. By scanning the beam across the sample, multiple A-scans can be obtained and combined into a cross section, the so-called B-scan.

The first implementation of OCT was the so-called time-domain OCT, in which a low-coherence light source is used (Fig. 14.5a). Interference therefore occurs only when the path lengths of the sample and reference arms match to within the coherence length of the light source. This coherence gating allows depth discrimination by tuning the reference arm length, typically realized with a motorized mirror stage. This mechanical movement limits the A-scan rate to the lower kHz range.

In contrast, in Fourier-domain OCT the whole broadband interference pattern is acquired spectrally encoded. This is achieved either with a broadband light source and a dispersive detector (spatially encoded Fourier-domain OCT, often called spectral-domain OCT) (Fig. 14.5b), or with a tunable narrowband light source and a point detector (time-encoded Fourier-domain OCT, often called swept-source OCT) (Fig. 14.5c). Based on the Wiener-Khinchin theorem the

**Fig. 14.5** Different types of OCT implementations. (**a**) Time-domain OCT with broadband light source and moveable mirror for depth ranging and a photodiode for detection. (**b**) Fourier-domain setup with broadband

light source, fixed mirror and spatially encoded detection. (**c**) Swept-source OCT setup with tunable light source, fixed reference mirror and a photodiode for detection

**Fig. 14.6** Alignment Screen of the VICTUS laser system

whole A-scan can be obtained by simply taking the Fourier transform of the acquired spectra without any moving parts [15].

This allows A-scan rates at least two orders of magnitude higher, of several hundred kHz, while the signal-to-noise ratio (SNR) improves in proportion to the number of detectors (camera pixels of the line sensor, or number of samples with swept-source OCT, both typically around 2000). Fourier-domain OCT therefore has both a speed and a sensitivity advantage of about 20 dB over time-domain OCT [16].
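The Fourier-domain reconstruction described above can be demonstrated with a toy simulation: a single reflector produces a cosine fringe across wavenumber, and the Fourier transform of that spectrum recovers a peak at the reflector's depth. All numbers below are illustrative assumptions, not system parameters from the text:

```python
import numpy as np

# Toy Fourier-domain OCT A-scan: a reflector at optical path difference z
# produces a fringe cos(2*k*z) across wavenumber k; the FFT of the spectrum
# recovers a peak at depth z, with no moving parts (Wiener-Khinchin).
N = 2048                             # samples across the sweep / line sensor
k = np.linspace(7.0e6, 8.0e6, N)     # wavenumber range, rad/m (illustrative)
z = 0.5e-3                           # reflector depth: 0.5 mm
spectrum = 1.0 + 0.5 * np.cos(2 * k * z)

a_scan = np.abs(np.fft.rfft(spectrum - spectrum.mean()))  # remove DC term
dk = k[1] - k[0]
depth_axis = np.fft.rfftfreq(N, d=dk) * np.pi  # depth = pi * f, from cos(2kz)
recovered = depth_axis[np.argmax(a_scan)]
print(f"{recovered * 1e3:.2f} mm")
```

The peak of the transformed spectrum lands at the programmed 0.5 mm depth, illustrating how a single FFT replaces the mechanical reference-arm scan of time-domain OCT.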

While retinal imaging of the human eye is the most common clinical application of OCT, anterior segment imaging is becoming increasingly popular. The wavelength of the light source must, of course, be optimized for the application. For anterior segment applications, the 1300 nm range is favorable because of its deeper penetration of the sclera, which allows better visualization of the chamber angle. Anterior segment OCT has great potential for refractive surgery applications and especially for cataract surgery (Fig. 14.6).

#### **14.7 Treatment Steps of FLACS Procedure**

#### **14.7.1 Planning**

A series of parameters, such as pupil dilation, lens thickness, and corneal thickness, must be measured before cataract surgery, and a surgical plan is then created. The planning parameters normally include the size, shape, and desired center of laser ablation for the capsulotomy; the diameter, depth, and cutting pattern for lens fragmentation; and the location, depth, and architecture of the corneal incisions. Adjustments can still be made in real time, guided by cross-sectional imaging during cataract surgery. Consider the LenSx laser system. After the system warm-up and self-checks have been completed, the opening screen appears on the monitor (Fig. 14.7). The surgeon then presses the program button to go to the pattern-selection screen, as shown in Fig. 14.7. There, the surgeon programs the lens pattern, capsulotomy pattern, primary incision pattern, secondary incision pattern, and arcuate incision pattern. Each of these patterns may be used either individually or consecutively during the same procedure.
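As a sketch of how such a plan might be represented in software (the class, field names, and defaults are invented for illustration and do not correspond to any manufacturer's interface):

```python
from dataclasses import dataclass, field

# Hypothetical container for the planning parameters listed in the text.
@dataclass
class FlacsPlan:
    capsulotomy_diameter_mm: float = 5.0
    capsulotomy_center_mm: tuple = (0.0, 0.0)   # offset from the scan axis
    fragmentation_pattern: str = "chopped"      # or "cylindrical", "hybrid"
    fragmentation_depth_mm: float = 3.5
    # each incision: (location in degrees, depth in um, number of planes)
    incisions: list = field(default_factory=list)

plan = FlacsPlan()
plan.incisions.append((135.0, 550.0, 3))  # e.g., a triplanar primary incision
print(plan.fragmentation_pattern, len(plan.incisions))
```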

## **14.7.2 Engagement**

The patient interface optically couples the eye to the laser delivery system to prevent eye movement. This is the first clinical step of FLACS. The docking system, namely the patient interface, is normally composed of a curved applanation lens and a suction ring. Both parts are integrated into a single piece and mounted on the laser delivery system. The patient interface serves as a sterile barrier between the patient's eye and the femtosecond laser. An ideal patient interface should satisfy three requirements. First, it should fix the eye without distorting it or causing the intraocular pressure (IOP) to rise. Second, it should have a wide field of view to allow for surgical facility. Third, it should prevent the corneal folds that occur with suction and allow for a tight laser focus. All commercially available femtosecond lasers appear to be effective in stabilizing the patient's eye; however, the methods and devices for docking are an area of differentiation among these femtosecond lasers [17]. LenSx and Victus have reported a curved applanation lens and suction system. Catalys and LensAR have reported a fluid-filled suction

**Fig. 14.7** Treatment planning screen of the VICTUS laser system

ring. LensAR has reported a water bath suction fixation device [18]. Femto LDV has reported a liquid interface [19].

#### **14.7.3 Visualization and Customization**

The image guidance sub-system provides information about the dimensions and location of ocular structures. It is a critical part of femtosecond laser cataract surgery because it guides the surgeon through the lens fragmentation zones and the placement of incisions. This sub-system must be able to detect the iris boundaries so that the surgery can be performed safely without cutting the iris, and it must be able to detect the posterior surface of the lens in order to maintain a safety zone and avoid cutting into the posterior capsule. The corneal thickness should be measured in order to customize the corneal incisions for each patient. The noncontact, fast, high-resolution image acquisition of FD-OCT, which enables detailed cross-sectional imaging of the anterior segment, is useful in various clinical settings. Important parameters for cataract surgery, such as iris boundaries, corneal thickness, lens position, and the iridocorneal angle, can be measured by FD-OCT in real time [20, 21]. The advantages of FD-OCT, including its noncontact nature, high resolution, accuracy in the presence of corneal opacity, and ease of use, make it the most popular imaging sub-system for commercially available femtosecond lasers in cataract surgery. Live OCT images help the surgeon during the docking process and are also used to adjust and verify the position and orientation of the selected surgical patterns. The platforms that use FD-OCT for three-dimensional, high-resolution viewing of ocular structures in cataract surgery include LenSx, Catalys, Victus, and Femto LDV, whereas LensAR utilizes Scheimpflug imaging technology.

#### **14.7.4 Treatment**

Treatment is the final step of femtosecond laser cataract surgery. The femtosecond laser creates incisions through tightly focused pulses that cut eye tissue with micron-scale precision. The incision is achieved by contiguously placing the femtosecond laser focus, scanned by a computer-controlled delivery system through the laser spot pattern from posterior to anterior, which reduces the amount of radiation reaching the retina. The femtosecond laser beam creates individual photodisruption sites in a contiguous pattern to form continuous incisions. The spacing between spots is programmed by the surgeon, who enters the size, shape, and location of the scanning pattern before the treatment. The treatment includes four parts: clear corneal incisions, creation of the capsulotomy, fragmentation of the lens nucleus, and correction of astigmatism through corresponding arcuate incisions. A corneal incision consists of a series of cuts starting at the desired corneal depth and continuing through the surface of the cornea. A capsulotomy incision consists of a cylindrical cut starting below the surface of the anterior capsule and continuing through the capsule a few microns into the anterior chamber. A lens fragmentation incision consists of a few oriented elliptically shaped planes that intersect at the center of the lens; the maximum cutting depth must remain above the posterior capsule. The surgeon controls the femtosecond laser primarily through a console interface comprising a keyboard, touchpad, and monitor, where the pattern parameters are entered. A footswitch allows the surgeon to start and stop the femtosecond laser treatment: the procedure can be interrupted at any time by releasing the footswitch and resumed by pressing it again. The femtosecond laser continues the treatment until the programmed pattern is complete.
For the LenSx system, the lens fragmentation pattern performs phacofragmentation of the crystalline lens and may be specified as a chopped or cylindrical pattern, or both. The chopped pattern is usually chosen for harder lenses and the cylindrical pattern for softer ones. A chopped pattern creates intersecting radial lines at the programmed posterior depth inside the lens. When one layer is filled, the laser creates another layer a few microns above it to fill in the ellipsoidal shape, by creating vertical, ellipsoidal planes that intersect at right angles. When the cylindrical pattern is used, the treatment pattern starts at the programmed posterior depth and continues to the programmed anterior depth as a series of concentric rings; the pattern is complete when the programmed anterior depth is reached. The two patterns can also be combined into a hybrid pattern, in which both the intersecting radial lines of the chopped version and the concentric rings of the cylindrical version are created at the programmed posterior depth. Both patterns are written simultaneously, layer by layer, until the programmed anterior depth is reached.

After the application of the femtosecond laser, standard manual phacoemulsification (PE) is employed to remove the fragmented crystalline lens. After removal of the cataract, an IOL is usually implanted into the eye, replacing the patient's cloudy natural lens with a synthetic lens to restore vision.

#### **14.7.5 Benefits**

The first clinical report of a human eye treated by femtosecond laser cataract surgery was from Hungary in 2008 [12]. FLACS is a promising technological advance that plays an ever-increasing role in cataract surgery. The FDA approved the use of femtosecond lasers for cataract surgery in 2010, and their application in cataract surgery increased dramatically after the LenSx system was cleared for use. In just a few years, femtosecond lasers have become relevant in clinical cataract surgery as an opportunity to improve the quality of the surgical procedure; in a relatively short period of time, the LenSx femtosecond laser system has been used in more than 200,000 procedures worldwide to date [13]. The accumulating clinical experience with FLACS indicates that it offers several potential advantages over manual surgery: a better quality of incision with reduced induced astigmatism, increased reliability and reproducibility of the capsulotomy with increased stability of the implanted lens, and a reduction in the use of ultrasound [22].

#### **14.7.5.1 Corneal Incision**

Clear corneal self-sealing incision architecture is of paramount importance [22]. Previous studies show that a corneal incision with internal wound gape increases the risk of leakage and thus of postoperative endophthalmitis [23]. Manual corneal incisions are difficult to control in terms of length and architecture; such manually created incisions may affect the stability of the wound under pressure following surgery and potentially allow leakage [24]. One of the most important advantages of FLACS is that the corneal incisions can be designed to be reproducible and stable, so that the incision width and length can be customized with a high degree of integrity [18, 24]. More stable and aberration-free results have been reported for FLACS incisions in the triplanar configuration [25]. A further benefit of FLACS is that placing corneal wounds and arcuate incisions at the desired position and depth makes the control of postoperative astigmatism much more effective [26].

#### **14.7.5.2 Capsulotomy**

If FLACS can produce a reproducibly round, centered, and intact anterior capsule, this alone would improve the safety of cataract surgery in a way that could possibly justify the introduction of this new technology [9]. Anterior capsular tears have a high incidence even in the hands of experienced surgeons [27, 28]. Capsular rupture can lead to a rise in IOP, persistent uveitis, cystoid macular edema, retinal detachment, infection, and retained soft lens matter requiring removal [9]. Femtosecond lasers have made it very easy to remove the capsular button [29]. A report based on a small group of porcine eyes showed that the capsule strength was as good as or greater than that of a manual capsulorhexis, allowing a greater stretching force before rupture [12]. The importance of a precisely sized capsulorhexis for optimizing IOL position and performance is well known; the major source of error in IOL power calculation is inaccuracy in the effective lens position [30]. FLACS can produce a more precise, reproducible, better centered, and stronger opening of the anterior capsule than conventional manual continuous curvilinear capsulorhexis (CCC) [22, 31]. FLACS can create a capsulotomy that is more precise, rounder, more regularly shaped, and better centered than a manual technique, which permits better overlap of the IOL and capsule. The improved overlap of the anterior capsule over the IOL has been shown to produce less IOL tilt and decentration compared with manual CCC [22]. Many similar results have been obtained with the commercially available femtosecond lasers in cataract surgery [18].

#### **14.7.5.3 Lens Fragmentation**

Femtosecond lasers are powerful tools for segmenting the crystalline lens, making the difficult chop steps that most frequently lead to complications in conventional cataract surgery much easier [32, 33]. Femtosecond laser cuts in the crystalline lens soften harder cataracts and reduce the amount of ultrasound energy required from the PE probe, thereby diminishing the risk of capsule complications and corneal endothelial injury (Fig. 14.8). A porcine eye study with the LenSx platform reported that FLACS reduced PE power by 43% and operative time by 51% [12]. A comparable study in human eyes showed a 39% average reduction in dispersed energy for PE [34]. A decrease in PE power and time using the LenSx platform was also reported by other studies. The effective PE time showed a 70% reduction in a study conducted on the Catalys platform [35], and a significant decrease in PE power and time was reported by a comparative study conducted on the Victus platform [36]. To date, clinical reports show that the percentage reduction of PE with FLACS varies by company and grade of cataract but is at least 33%.

#### **14.7.5.4 Other Benefits**

Other benefits of FLACS include the ability to correct astigmatism through corresponding arcuate incisions, a reduction of infection possibility, decreased endothelial cell loss, and possibly improved visual and refractive outcomes.

**Fig. 14.8** Lens fragmentation pattern of the VICTUS femtosecond laser

#### **14.7.5.5 Safety**

Multiple commercial femtosecond laser systems are approved by the FDA for refractive surgery, including use in creating corneal flaps in LASIK surgery, and for cataract surgery. Laser safety has been well evaluated for LASIK surgery through investigations from the laboratory to the clinic [37–45], and the laser safety of FLACS has also been reported [46]. A safety assessment can provide an analysis of exposure relative to established exposure limits; the ANSI Z136.1-2007 series provides internationally accepted limits. According to ANSI Z136.1-2007, the photodisruption threshold for a femtosecond laser is about 1 J/cm². In a cataract procedure, the laser is tightly focused on the lens or cornea for efficient cutting, which results in a transmitted beam with a diameter of a few millimeters on the retina. Because the beam on the retina is a few millimeters in diameter (compared with the focused 2 μm spot on the lens or cornea), the fluence on the retina is much smaller than in the lens or cornea and far below the photodisruption threshold. In fact, the actual temperature increase in an in vivo retina is expected to be even smaller due to the presence of heat sinks (vitreous, aqueous humor) and the cooling effect of local blood flow. Additionally, in cataract surgery, the laser energy reaching the retina may be even lower because of the presence of the cataract itself, resulting in greater safety for the retina during FLACS. Experiments on rabbits did not show retinal damage with FLACS, and a safety evaluation of femtosecond lentotomy on porcine lenses likewise showed no retinal damage at settings very similar to those used in FLACS [47]. Initial clinical results for safety in FLACS were reported in 2010 [46]. To date, more than 200,000 clinical procedures have demonstrated the retinal safety of FLACS in a persuasive way.
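The fluence argument above is easy to quantify: for a fixed pulse energy, fluence scales inversely with spot area, so it drops by the square of the spot-diameter ratio between the focus and the retina. A minimal check using the figures mentioned in the text (2 μm focus, ~3 mm retinal beam assumed as a representative value):

```python
def retinal_fluence_ratio(focus_um: float, retina_mm: float) -> float:
    """Factor by which fluence drops between the focal spot and the retina,
    assuming the same pulse energy spread over each spot area."""
    return (retina_mm * 1e3 / focus_um) ** 2

# A 2-um focus vs a ~3-mm beam on the retina: the fluence falls by about
# 2 x 10^6, placing it far below the ~1 J/cm2 photodisruption threshold.
ratio = retinal_fluence_ratio(2.0, 3.0)
print(f"{ratio:.2e}")
```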

#### **14.8 Clinical Experience with FLACS**

OCT is crucial for centration on the visual axis. Femtosecond lasers for cataract surgery have been on the market for over 5 years now, and the questions for cataract surgeons are: Has the technology advanced over this period, and if so, what advances have been made? With the latest B + L VICTUS femtosecond laser platform (Fig. 14.9), a number of key features have been introduced that considerably improve patient outcomes. The feature list is long, and cataract surgeons appreciate the VICTUS' ability to create LASIK flaps and perform different kinds of keratoplasty, but for cataract surgery a few key features stand out as the most important for improving patient outcomes.

The first key feature is the implementation of the new swept-source OCT system (Fig. 14.10). It has a very high resolution, displays a live OCT image throughout the procedure, and performs 50,000 A-scans per second; surgeons view it as having an almost "filmic" quality. The system also has enhanced contrast sensitivity compared with previous instruments, and the new software offers automatic recognition of the pupil, lens thickness, and the anterior and posterior capsule. There is a long list of features: various software optimizations, not least an advanced identification management system; an improved OCT capability; soft docking, which is quite important for cataract surgery; and, perhaps most impressive of all, the new apex centration system, all of which can be seen in a surgical video available at: https://youtu.be/-6VkDF0G7gQ (Author: Dr. Tobias Neuhann).

With all of the axes in the eye (Fig. 14.11), it can be hard to decide how to center laser capsulotomies or, even more so, manual capsulorhexes. It is best to center the capsulotomy on the visual axis, and the OCT supports this by calculating 0° and 90° on the surfaces of the anterior and posterior capsules.

**Fig. 14.9** VICTUS® Femtosecond Laser Platform

**Fig. 14.10** The new swept-source OCT system: 50,000 A-scans per second, enhanced contrast sensitivity, plus automatic recognition of the pupil, lens thickness and the

anterior and posterior capsule. See the OCT in action online as part of femtosecond laser-assisted cataract surgery at: https://youtu.be/-6VkDF0G7gQ

**Fig. 14.11** The optical axis of the eye, as a purely theoretical construct where the surfaces of the cornea and crystalline lens are rotationally symmetric, and their centers of curvature lie on a common line. If a point source were shone into the eye, there would be a point where all the Purkinje images coincide—the line from the point source through each Purkinje image would define the optical axis. In real eyes, the Purkinje images do not align and the surfaces are not rotationally symmetric, so no true optical axis of the eye exists. Occasionally, the optical

axis is defined as the line that minimizes the deviation of the Purkinje images; (top right) The fovea, the center of the pupil, E, and the nodal points N and N′; (bottom left) coaxially sighted corneal light reflex (CSCLR), where the line from the fixation point that is normal to the cornea defines the CSCLR; (bottom right) The pupillary axis (perpendicular to the cornea, found by aligning the first Purkinje image with the center of the pupil) and the line of sight (connecting the fixation point to the center of the entrance pupil)


Where the lines cross (Fig. 14.12), the surgeon centers the capsulotomy—which enables the surgeon to find the apex of the lens. Usually, most surgeons would center the capsulotomy on the pupil center, which is easiest—but there is a noticeable difference in positioning (Fig. 14.13).

Cataract surgeons are convinced that the VICTUS' OCT-guided method ensures that the center of the lens is optimally positioned in the capsular bag, centered on the visual axis. This is particularly important for aspheric, toric and multifocal lenses, and is greatly improved by the VICTUS apex centration capability. The optics of IOLs are becoming increasingly sophisticated, and apex centration should now be considered a mandatory tool for sophisticated IOLs such as aspheric monofocal, multifocal

**Fig. 14.12** VICTUS' OCT enables the cataract surgeon to center the capsulotomy on the visual axis by calculating 0° and 90° on the surface of the anterior and posterior capsules. The capsulotomy is centered where the lines cross, enabling the surgeon to find the apex of the lens

or trifocal toric IOLs. The latest VICTUS system's high-resolution OCT is definitely superior to a Purkinje image and will be the 'conditio sine qua non' for the next generation of IOLs. It will enable more patients to benefit from the advantages of these IOLs; it should help avoid the specter of negative dysphotopsia, especially the outer dark arc that patients often complain about and for which there is no real solution, as well as events like capsular phimosis and postoperative toric IOL rotation. Considering some of the most recent IOLs to come to market with a groove in the optic edge that "hooks" the lens in place at the anterior capsule, the benefits of the femtosecond laser rhexis approach become obvious: when such a lens is implanted in a standard eye via an apex-centered capsulotomy, phimosis cannot occur, because the anterior capsule sits inside the lens, and the lens cannot rotate. Fixating the IOL on the anterior capsule also places it closer to the iris and hence will not create any negative dysphotopsia. Cataract surgeons believe that this procedure will further the next generation of IOL optics.

#### **14.9 Summary and Outlook**

The advent of all-solid-state femtosecond lasers, coupled with a computer-controlled beam delivery system, enables new applications of high

**Fig. 14.13** Left: The difference between a capsulotomy centered on the pupil (red and yellow circles) and the apex of the lens (actual capsulotomy) as determined by OCT. Right: The same eye once the IOL is implanted. The

IOL looks decentered—but is not. The symmetry between the edge of the anterior capsule and the edge of the implant proves that an apex-centered capsulotomy is superior to a pupil-centered capsulotomy

precision femtosecond laser ablation for ophthalmology. Combining this with a precision imaging technique for the anterior segment, such as an OCT system, makes it possible to accurately target tissue in the crystalline lens with a femtosecond laser. Femtosecond laser systems have successfully entered the cataract surgery market as a promising technological advance that plays an ever increasing role in cataract surgery, where it automates the three main surgical steps: corneal incision, capsulotomy, and lens fragmentation. In just a few years, femtosecond lasers have become clinically relevant in cataract surgery as an opportunity to improve the quality of the surgical procedure, and FLACS has been used in more than 200,000 procedures worldwide to date [13]. Clinical experience so far indicates that this new technology is promising for the field of cataract surgery. FLACS appears safe and efficacious, and may eventually be proven superior to conventional cataract surgery.

#### **References**


technique in cataract surgery. J Cataract Refract Surg. 2013; https://doi.org/10.1016/j.jcrs.2013.05.035.


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

**15**

# **Refractive Index Shaping: In Vivo Optimization of an Implanted Intraocular Lens (IOL)**

Ruth Sahler and Josef F. Bille

#### **15.1 Introduction**

Patient satisfaction is a constant pursuit in cataract surgery. To enhance the chance of each patient's postoperative satisfaction, cataract surgeons measure the refraction of the eye preoperatively and attempt to select the appropriate IOL, based not only on those measurements but also on the patient's needs and expectations. The overall number of implanted premium IOLs is still small compared to the overall number of implanted IOLs. Some surgeons avoid implanting premium IOLs because of their cost, possible limitations or side effects, and therefore the possibility of an unhappy patient [1].

A multifocal or toric IOL is more sensitive to decentration or tilt than a standard IOL. Postoperatively the IOL settles in place, and during this process the lens can still move. Current adjustable technologies only allow the adjustment of an implanted IOL if that IOL model was selected prior to cataract surgery, and therefore prior to possible complications. After the first, desired adjustment is finalized, the IOL is locked in and is no longer adjustable [2]. For standard, hydrophobic and hydrophilic IOLs

R. Sahler (\*)

Perfect Lens LLC, Irvine, CA, USA

J. F. Bille
University of Heidelberg, Heidelberg, Germany

the options for an undesired refractive outcome range from spectacles and refractive surgery to lens explantation.

Studies suggest that a significant number of patients will require spectacle prescriptions after cataract surgery. For example, a clinical study found that 37.8% of cataract patients had preoperative astigmatism of more than 1.00 D [3]. Furthermore, it was reported that postoperative astigmatism of greater than 0.75 D has an adverse effect on the performance of a monofocal IOL [4]. Further, about 25.7% of patients who undergo conventional phacoemulsification and about 28% who undergo laser-assisted cataract surgery have a postoperative spherical error of more than 0.50 D, which is enough to adversely affect their distance vision [5].

Additionally, cataract surgery is generally performed in the elderly population, so most patients who do not choose multifocal IOL implantation will require reading correction postoperatively. Market Scope estimates that more than 90% of post-cataract patients are presbyopic. Taken together, all of these factors indicate that more than 50% of patients would benefit from a distance correction after cataract surgery, and another 40% might take advantage of multifocal optics.

Unfortunately, current premium IOLs cannot reliably solve these problems because there is a possibility for the IOL to move postoperatively. Further, the effects of wound healing are difficult to predict and add an additional complication.

Perfect Lens is developing refractive index shaping (RIS) technology, which can theoretically alter an IOL after it has been implanted and has settled in the eye. Preclinical studies have shown that a short (<30 s) in-office procedure can adjust acrylic IOL materials such that spherical, toric, and multifocal issues can be resolved permanently. The use of femtosecond lasers to create refractive index changes in various materials has been studied for years. Ohmachi and Igo [6] showed a refractive index change of 0.056 in glass using a femtosecond laser. Ding et al. [7] used a femtosecond laser to obtain a refractive index change of up to 0.06 in hydrogel polymers.

Different theories regarding femtosecond laser material interactions which affect the refractive index change have been presented. The Rochester Group hypothesized that the light from the femtosecond laser induced crosslinking within a hydrophilic material and thus created an increase in the refractive index [8]. Takeshima et al. [9] believed the refractive index change in glass was caused by local heat effects from phase separation, while Katayama and Horiike [10] proposed that all changes resulted from either: (1) crosslinking, (2) phase separation, or (3) decomposition.

Recently a new process was discovered wherein existing molecules within a polymeric material become hydrophilic inside an intraocular lens (IOL) [11]. This change in hydrophilicity occurs when the polymeric material is immersed in an aqueous medium while it is exposed to femtosecond laser radiation. The aqueous medium and the femtosecond laser radiation together provide the chemical basis for the hydrophilicity-based refractive index change. After the exposure of the polymeric material to femtosecond laser radiation, water slowly diffuses to the sites with increased hydrophilicity and forms hydrogen bonds, typically over a 24–72 h period, creating a refractive index change in the polymeric material.

#### **15.2 Technology Background**

#### **15.2.1 Femtosecond Laser-Induced Refractive Index Change (RIS)**

A new method for modifying the refractive index of polymeric materials has been developed, called Refractive Index Shaping (RIS) (Fig. 15.1a) [11].

**Fig. 15.1** (**a**) Refractive Index Shaping (RIS), Femtosecond (FS) laser, refractive index of IOL (n1) and refractive index of RIS lens (n2). (**b**) Phase Wrapping. (**c**) Multifocal IOL to

Monofocal, before (left) and after (right) RIS-modification. (**d**) Hydrophilicity based Δn change (adapted with permission from Ref. 12, The Optical Society)

High repetition rate femtosecond laser pulses are directed to a designated area to create a "lens" inside an IOL. RIS changes the refractive characteristics of the polymeric material without cutting the material. The RIS-lens is a gradient lens, with the refractive index change governed by the instantaneous energy of the laser pulse, which is regulated by an acousto-optical modulator (AOM) at approx. 1 MHz. The physical parameters of the procedure, such as scan speed, wavelength, pulse rate and energy per pulse, are provided in [13], together with data on the homogeneity of the refractive index change. In preparing a RIS lens, the femtosecond laser is directed to a small area within the polymeric IOL. The laser light has two main effects on the acrylic material: (1) the most recognized is that the laser light heats the material and causes a thermally induced change, and (2) if the proper wavelength is utilized, the exposure alters the polarity of certain molecules within the polymeric material and changes its hydrophilicity. The change in hydrophilicity drives a large, repeatable and homogeneous change in refractive characteristics, which does not depend on the accumulation of heat and can therefore be used with a fast scan speed, allowing for in vivo application.

#### **15.2.2 Phase Wrapping**

In a traditional convex lens, one would be limited to a height of 200 μm (central slab area) for adjusting the optical power of the IOL. The power of a 6 mm lens with a height of 200 μm would be 0.44 dpt (Δn = 0.01). Phase wrapping is a process used to create a RIS "lens" with an enhanced diopter change without increasing the height of the "lens". The convex lens profile is thereby reduced to a thin layer of approx. 50 μm thickness, creating multiple refractive zones. The different phase levels are created by controlling the energy per pulse and the focal spot. For a "lens" with a diameter of 6 mm, one zone corresponds to 0.1 diopter (Fig. 15.1b).
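The 0.44 dpt figure follows from the paraxial sag relation for a plano-convex profile: a spherical cap of sag h and semi-diameter r has a radius of curvature R = r²/(2h), and an index step Δn then yields a power P = Δn/R. A minimal sketch (the function name and unit conventions are our own, not from the chapter):

```python
def ris_lens_power(delta_n, diameter_mm, height_um):
    """Paraxial power (diopters) of a plano-convex index-modified profile.

    Treats the profile as a spherical cap: R = r^2 / (2 h), P = delta_n / R.
    """
    r = diameter_mm * 1e-3 / 2.0   # semi-diameter in meters
    h = height_um * 1e-6           # sag ("height") in meters
    radius = r * r / (2.0 * h)     # radius of curvature in meters
    return delta_n / radius        # power in diopters

# Chapter example: 6 mm lens, 200 um height, delta_n = 0.01
print(round(ris_lens_power(0.01, 6.0, 200.0), 2))  # → 0.44
```

The same relation makes clear why phase wrapping is needed: reaching several diopters without wrapping would require a proportionally taller index-modified slab.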

#### **15.2.3 Example of RIS-Procedure: Change of Diffractive Hydrophilic IOL into a Monofocal IOL**

The possibility of changing a diffractive multifocal IOL into a monofocal IOL was evaluated. A suitable lens design was created to match the diffractive power and energy split of the diffractive multifocal IOL, as depicted in Fig. 15.1c. The original IOL measured 20.85 D with an add power of 3.58 D and modulation transfer function (MTF) values of 0.37 and 0.26. After RIS shaping, the IOL measured as a monofocal IOL at 21.04 D with an MTF of 0.57. The IOL shown in Fig. 15.1c, before (left), was a commercial diffractive multifocal IOL. The RIS-process was applied to change the lens from multifocal to monofocal. The inverse process, i.e. the creation of multifocality in a monofocal hydrophobic IOL, is also shown in [14]. A RIS-lens can subsequently be 'erased', e.g. by creating a RIS-lens with opposite refraction in an adjacent layer.

#### **15.2.4 Hydrophilicity-Based Δn Change**

To demonstrate that the hydrophilicity of the polymeric material has been changed, two areas of polymeric material were compared. One area of the material had not been treated, and the adjacent area was treated with the femtosecond laser. To test whether the treatment created a hydrophilic area, the wetting angle measurement technique was employed [15]. The treated and untreated sections of an acrylic hydrophobic material were each exposed to a drop of water. Figure 15.1d (left) shows the drop of water on a treated area, while Fig. 15.1d (right) displays a water drop placed on an untreated area of the lens. The contact angle of the drop on the treated material (Fig. 15.1d, left) is ~64°, indicating contact with a hydrophilic surface. The contact angle of the drop on the untreated material (Fig. 15.1d, right) is ~87°, indicating contact with a hydrophobic surface. The change in hydrophilicity demonstrates that the treatment with a femtosecond laser created a hydrophilic area.

#### **15.3 Microscope Study: Methods and Materials**

Two different microscope setups were used for the study: Laser Induced Fluorescence (LIF) microscopy (Sect. 15.3.1) [16] and Raman microscopy (Sect. 15.3.2) [17]. Various hydrophilic and hydrophobic intraocular lens materials were studied (Sect. 15.3.3). Each microscope was used to identify exactly what molecular changes occur upon exposure of the polymeric material to the light of the femtosecond laser.

#### **15.3.1 Laser-Induced Fluorescence (LIF) Microscopy, STED Contrast**

The STED (Stimulated Emission Depletion) microscope uses a low power pulsed supercontinuum laser source (WhiteLase SC450-PP-HE, Fianium, Southampton, UK) for excitation at virtually any optical wavelength. After removal of the IR part of the supercontinuum spectrum using a 760 nm short pass filter, the desired excitation wavelength is selected using an acousto-optical tunable filter (AOTF, PCAOM-VIS, Crystal Technologies, Palo Alto, USA). The beam passes the AOTF three times in order to suppress the undesired wavelength range of the supercontinuum spectrum; the triple pass suppresses 1000 times better than a regular single pass. The STED laser is a frequency-doubled pulsed fiber laser (Katana-08 HPKA/40/07750/600/1600/FS) providing 600 ps pulses of up to 40 nJ pulse energy at a wavelength of 775 nm. The STED laser can be triggered electronically over a wide frequency range (25/40 MHz) which greatly simplifies the synchronization of the excitation and STED pulses. The STED laser is triggered by the pulsed supercontinuum laser operating at 38.6 MHz.

#### **15.3.2 Raman Microscopy**

Raman spectra were recorded on a commercial HORIBA XploRA PLUS Raman Microscope (HORIBA Jobin Yvon GmbH, Bensheim, Germany). All spectra were measured with a 10× objective and a 600 g/mm grating. The wavelength of the continuous wave excitation laser source was 785 nm (with a laser output of approximately 100 mW). Raman spectra were acquired both in the fingerprint (200–1800 cm<sup>−1</sup>) and high-wavenumber (2400–3800 cm<sup>−1</sup>) regions.
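For orientation, a Raman shift in cm<sup>−1</sup> converts to an absolute (Stokes) detection wavelength by subtracting the shift from the excitation wavenumber 1/λ. A small sketch, with our own function naming (not part of the instrument description):

```python
def raman_shift_to_wavelength_nm(excitation_nm, shift_cm1):
    """Absolute (Stokes) scattering wavelength for a given Raman shift."""
    excitation_cm1 = 1e7 / excitation_nm       # laser wavenumber in cm^-1
    return 1e7 / (excitation_cm1 - shift_cm1)  # back to nm

# With 785 nm excitation, the edges of the two acquisition windows:
for shift in (200.0, 1800.0, 2400.0, 3800.0):
    print(f"{shift:6.0f} cm^-1 -> {raman_shift_to_wavelength_nm(785.0, shift):.0f} nm")
```

With 785 nm excitation this places the fingerprint window at roughly 798–914 nm and the high-wavenumber window at roughly 967–1119 nm, i.e. in the near infrared, where background fluorescence is comparatively low.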

#### **15.3.3 Materials**

The microscopic study was performed on three different IOL materials. The following samples were studied: (1) hydrophilic acrylic material without yellow dye [18]: (1a) a hydrophilic acrylic intraocular lens (see e.g. Sect. 15.4.2.1) and (1b) a hydrophilic acrylic strip, cut from a hydrophilic acrylic button (see e.g. Sect. 15.4.2.2); (2) a hydrophobic acrylic strip with yellow dye (blue blocking), cut from a hydrophobic acrylic button [19]; and (3) a hydrophobic acrylic strip without yellow dye, cut from a hydrophobic acrylic button [20]. The hydrophilic acrylic intraocular lens had a refractive power of five diopters; the strips measured approximately 10 mm × 2 mm × 2 mm and exhibited no refractive power. All strips were cut from buttons of the materials specified in [18–20]. The acrylic buttons were disc shaped, 10 mm in diameter and 2 mm thick.

The chosen microscopic techniques provide information on the chemical nature of the process at the electronic (fluorescence) as well as the molecular (Raman) level. CARS-microscopy is sensitive to refractive index changes, due to its four-wave mixing feature. In the case of the clear hydrophilic acrylic material, LIF microscopy, STED microscopy and Raman microscopy were applied. The yellow hydrophobic material as well as the clear hydrophobic material were studied with LIF microscopy and STED microscopy.

#### **15.4 Chemical Basis for RIS**

#### **15.4.1 Enhancement of Hydrophilicity by Femtosecond Laser Excitation**

In Fig. 15.2, the photo-induced hydrolysis of polymeric material in aqueous media is presented, in which two hydrophilic functional groups, acid group and alcohol group, are produced [12]. This result is similar to the results found in previous research on surface treatment of PMMA, with femtosecond laser two-photon excitation [21], and excimer laser UV excitation [22].

Another possible mechanism for the enhancement of hydrophilicity is two-photon depolymerization [23]. Zhou et al. [24] used a random copolymer of tetrahydropyranyl methacrylate (THPMA) and methyl methacrylate (MMA) doped with BSB-S2 as the UV photoacid generator (PAG) for microfabrication. At the laser focal spot, the THPMA groups were converted to carboxylic acid groups by photo-generated acid-induced ester cleavage reactions, and were therefore rendered soluble in aqueous base developer. This process may

**Fig. 15.2** Photo-induced hydrolysis (adapted with permission from Ref. 12, The Optical Society)

essentially contribute to the increase of hydrophilicity in laser treated areas in hydrophobic lens materials.

#### **15.4.2 Femtosecond Laser Excited Fluorescence in a Hydrophilic Intraocular Lens**

#### **15.4.2.1 Section of a Hydrophilic Intraocular Lens**

A schematic sketch of the hydrophilic intraocular lens of five diopters is shown in Fig. 15.3a. The lens is made of a clear material, measures 6 mm in diameter, and the treated area lies within a 4 mm circle in the center of the lens. As shown in Fig. 15.3b, the newly formed hydrophilic molecules in the laser-treated area can be imaged by Laser Induced Fluorescence ("LIF") microscopy, visualizing the phase-wrapped RIS-lens by green fluorescent light emission, with blue excitation and wide field illumination (10× objective). Different shades of green correspond to different amounts of fluorescence light, indicating different amounts of newly formed hydrophilic polar molecules.

**Fig. 15.3** (**a**) Schematic sketch of hydrophilic acrylic lens (five diopters), RIS-treated area 4 mm circle in the center of the intraocular lens. (**b**) Fluorescence image of a

RIS-lens, inscribed in the hydrophilic acrylic lens, sketched in this figure (a) (adapted with permission from Ref. 12, The Optical Society)

The fluorescence image reflects the homogeneity and repeatability of refractive index change in the laser treated areas.

#### **15.4.2.2 Fluorescent Light, Originating From Newly Created Fluorophores (Simultaneous Scans)**

In Fig. 15.4, the simultaneous scanning of a laser-excited area with light of two different wavelengths, 600 nm (left image, fluorescence detection at 628 nm) and 650 nm (right image, fluorescence detection at 708 nm), is depicted, demonstrating the detection of spatially distributed fluorophores in "on/off" states. When a fluorophore is exposed to light of the correct wavelength, it absorbs energy and emits fluorescent light. This so-called "blinking" indicates the presence of single fluorophores with active or silent behavior. In the upper middle part, the two instantaneous images are overlaid, with the left image labeled in red and the right image in green. Note the scale bar of 1 μm, demonstrating submicron resolution of the images. The regions imaged in Fig. 15.4 are only approx. 10 μm in size and were selected in fully treated areas, resulting in homogeneous appearances.

#### **15.4.2.3 Femtosecond Laser Excited Fluorescence in a Hydrophobic Intraocular Lens**

In Fig. 15.5, various RIS lenses written in clear hydrophobic lens material [20] are imaged with fluorescence microscopy: a cylindrical RIS lens (Fig. 15.5a), a spherical RIS lens (Fig. 15.5b) and a spherocylindrical RIS lens (Fig. 15.5c).

In Fig. 15.6a and b, transmission (top) and fluorescence (bottom) images of a hydrophobic strip are depicted [20]. A RIS lens was patterned (Fig. 15.6a and b, arrows) in the center of the hydrophobic strip.

In Fig. 15.6c, fluorescence spectra from the RIS-pattern of clear hydrophobic mate-

**Fig. 15.4** Simultaneous scans at 600 and 650 nm. Left image—fluorescence detection at 628 nm, right image fluorescence detection at 708 nm. In the upper middle part, two instantaneous images were overlaid, labeling the left image in red color and right image in green color. The

imaged regions were approx. 10 μm in size, and were selected in fully treated areas, resulting in homogeneous appearances (adapted with permission from Ref. 12, The Optical Society)

**Fig. 15.5** Fluorescence images of hydrophobic, RIS lenses, (a) cylindrical, (b) spherical and (c) spherocylindrical RIS lens

rial [20] are shown, with excitation/emission at 405/500 nm, and 488/535 nm, respectively. The spectra closely resemble the spectra of the RIS pattern of yellow hydrophobic material [19], as well as the spectra from the hydrophilic material [18], reaffirming the fact that similar fluorescent molecules are generated in hydrophilic and hydrophobic materials.

Figure 15.6d (left) displays simultaneous xz-scans at three excitation wavelengths (exc 470 nm, em 525/50 nm (upper left); exc 605 nm, em 628/32 nm (upper right); exc 650 nm, em 708/75 nm (lower left)). The bright spot marks the surface of the clear hydrophobic material.

The fluorescence appeared strongest at 605 nm excitation while it was very weak at blue light excitation. Inside the bulk material the intensity drops after a few microns. This is probably caused by a mismatch of the refractive index between the immersion oil and the bulk material. The lower narrow line marks the coverslip glass surface on top of which the sample was mounted.

The clear hydrophobic material was imaged at two fluorescence bands simultaneously (see Fig. 15.6d (right)): exc 605 nm, em 628/32 nm (upper left) and exc 650 nm, em 708/75 nm (upper right). The fluorescence emissions appear homogeneous in both wavelength bands at a diffraction limited resolution level of 230 nm. The regions imaged in Fig. 15.6d are only approx. 10 μm in size and were selected in fully treated areas, resulting in homogeneous appearances.
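The quoted 230 nm resolution is consistent with the standard lateral diffraction limit d = λ/(2·NA). The numerical aperture of the objective is not stated in the text; assuming a typical oil-immersion objective with NA ≈ 1.4 (our assumption), the two emission bands bracket the quoted value:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Lateral diffraction-limited resolution: d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# NA = 1.4 is an assumed oil-immersion value, not given in the chapter.
for emission_nm in (628.0, 708.0):
    print(f"{emission_nm:.0f} nm emission -> {abbe_limit_nm(emission_nm, 1.4):.0f} nm limit")
```

Under this assumption the limits come out near 224 nm and 253 nm, i.e. on the order of the 230 nm stated in the text.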

The fluorescent molecules in the clear hydrophobic material bleach [20], i.e. photo-convert into a non-fluorescent species upon excitation, similar to common organic fluorescent molecules. Figure 15.6e (left) shows a darker square region in the center, which had previously been scanned several times. The region imaged in Fig. 15.6e is likewise only approx. 10 μm in size and was selected in a fully treated area.

The fluorescent species in the clear hydrophobic material can be stimulated from the excited to the ground state similar to common organic fluorescent molecules. Figure 15.6e (right) shows the fluorescence intensity measured in a STED microscope (see [16]), with excitation laser and STED laser simultaneously switched on. The brighter band shows a region where the STED laser was temporarily switched off. No finer structures could be found with STED imaging contrast. The noise is shot noise from a photon count per pixel of 17 in the bright region and a count of 6 in the regions where the STED laser was on.

**Fig. 15.6** (**a**) Hydrophobic clear strip (birdview): transmission image (top), fluorescence image (bottom) and the RIS patterns indicated by arrows. (**b**) Hydrophobic clear strip (sideview): transmission image (top), fluorescence image (bottom). (**c**) Fluorescence spectra, excitation at 405 nm and emission max. at 500 nm (top), excitation at 488 nm and emission max. at 535 nm (bottom) (sample: Clear hydrophobic strip [20]). (**d**) Left: Magnified a few μm sized confocal xzslice (side view) across a bright part of the Fresnel pattern. Right: Magnified confocal xy-slice (top view, at the samples surface) at a bright part of the Fresnel pattern. The fluorescence images were taken simultaneously at 470 nm, resp. 605 nm, resp. 650 nm excitation. (**e**) High resolution fluorescence xy-images (top view) of clear hydrophobic strip. Left: The darker squared field shows an area which was previously scanned and gradually bleached. Right: The bright band indicates an area where the STED beam was switched off temporarily while the full image was scanned. Thus, the newly created fluorophores show analogous behavior (bleaching and stimulated emission) like regular fluorescent dyes (adapted with permission from Ref. 12, The Optical Society)

#### **15.4.2.4 Identification of Fluorescent Molecules as Benzenamines**

The excitation/emission spectra of a laser-excited area are plotted in a three-dimensional graph, with the excitation wavelengths on the abscissa and the emission wavelengths on the ordinate (see Fig. 15.7a). The z-axis depicts the intensity of the fluorescence light emitted by the fluorophores. The fluorescence excitation and emission scan was done with a TCS SP8 X system (Leica Microsystems, Mannheim). Data analysis and the graphs were generated using the Leica confocal software LASX. The microscope was equipped with a white light laser. The highest fluorescence emission was generated at a white light laser wavelength of 470 nm. The corresponding emission spectrum extends over a broad spectral region, from 500 to 650 nm, indicating the formation of hydrophilic polar molecules. This graph demonstrates the sensitivity of the polymer molecules to laser light excitation.

With an excitation wavelength of 472 nm, the emission spectrum of the fluorophore is centered at 527 nm, as depicted in the lower left of Fig. 15.7b (TCS SP8 X (Leica Microsystems GmbH)). In the upper left of Fig. 15.7b, a typical excitation/ emission spectrum of an aromatic carboxylic acid Rhodamine Green Carboxylic Acid is plotted for comparison, with excitation at 480 nm and emission centered at 525 nm. Thus, the spectral signature of the femtosecond laser generated polar molecule is similar to the characteristics of an aromatic carboxylic acid. Based on the chemical composition of the acrylic material with UV-dopant copolymer, the spectral signature of the femtosecond laser generated polar molecules points to the class of benzenamines, like N-phenyl-4-(phenylazo) benzenamine (C18H15N3). For comparison, the excitation/emission spectra of a pure acrylic material, e.g. PMMA, are shown on the lower right side, which are positioned in the deep UV, indicating that the UV-absorber molecules, which get excited by two-photon absorption, are essential to initiate the observed molecular changes.

#### **15.4.2.5 Raman Spectra of Hydrophilic Material**

In Fig. 15.8, Raman spectra are depicted which were recorded at three different positions of the hydrophilic material: Left (RIS-pattern, blue), Right (RIS-pattern, red), Center (untreated area, black). The high-wavenumber (2400–3800 cm<sup>−1</sup>) region of the Raman spectra shown in Fig. 15.8a is dominated by two features. The sharp feature in the region 2800–3000 cm<sup>−1</sup>, which is composed of three distinct vibrational bands, can be assigned to stretching vibrations of CH and CH2 functional groups [25]. The relatively broad feature ranging from 3100 cm<sup>−1</sup>

**Fig. 15.7** (**a**) Excitation/Emission Spectra of fluorescent molecule. (**b**) Identification of fluorescent molecule (adapted with permission from Ref. 12, The Optical Society)

**Fig. 15.8** Raman spectra of a hydrophilic material: (**a**) High-frequency part, (**b**) Low-frequency part. Dashed dotted horizontal lines represent the zero signal base lines of the respective Raman spectra, which were shifted vertically for the sake of clarity (adapted with permission from Ref. 12, The Optical Society)

up to ca. 3600 cm<sup>−1</sup> with a frequency maximum around 3300 cm<sup>−1</sup> is characteristic of stretching vibrations of hydrogen-bonded OH groups of water molecules in the hydrophilic polymer material [26]. The assignments of several distinct spectral features in the fingerprint region (200–1800 cm<sup>−1</sup>), indicated in the Raman spectra of Fig. 15.8b, show that the base material of the hydrophilic strip largely resembles the molecular structure of a poly-2-hydroxyethylmethacrylate (PHEMA) polymer [25, 27]. In the latter case the capability for the high water uptake of the material can be attributed to the presence of OH groups along the flexible polymer backbone, which can form primary hydrogen bonds with water molecules.

As can be seen in Fig. 15.8a, the overall OH band intensity is significantly diminished in the Raman spectra measured in the laser-treated areas (Left and Right) as compared to the untreated area (Center) of the strip. This is consistent with consumption of H2O molecules in the laser-treated areas due to the photo-induced hydrolysis reaction shown in Fig. 15.2. Furthermore, the reduction of the OH band intensity in the laser-treated region is paralleled by a significant increase of the CH and CH2 stretching vibration band intensities, which further indicates reaction of the polymer material upon femtosecond laser treatment. The latter fact is confirmed by the observed significant change of the low-frequency range Raman spectra (Fig. 15.8b) upon laser treatment. The Raman spectra taken within the treated area (Right, Left in Fig. 15.8b) exhibit a noticeable contribution of background fluorescence light in the low-frequency region (200–2500 cm<sup>−1</sup>), due to excitation/emission processes of newly created fluorophores. In contrast, there is almost no fluorescence background in the untreated area (Center in Fig. 15.8b), demonstrating that fluorophores are generated solely by the irradiation with the femtosecond laser. Considering the possible presence of UV-blockers/stabilizers in the polymer material (such as benzotriazole derivatives [28, 29]), the newly created fluorescent molecules might be phenazine derivatives, which could be formed by a reaction sequence initiated by the femtosecond two-photon laser-induced photochemical activation of the benzotriazole copolymer derivatives. Again, these molecules remain in place and are modified by the exposure to the laser light. Furthermore, a new molecular vibration in the region 1600–1620 cm<sup>−1</sup> is observed in the laser-treated area (Fig. 15.8b, Left), which can be assigned to an aryl carboxylic acid COOH moiety [30]. This entity is a residue of the original reaction initiated by the laser light. The laser-generated fluorophores could be phenazine-1-carboxylic acid molecules (see Table 15.1).


**Table 15.1** Spectral band assignments

#### **15.5 In Vivo Lens Shaping Proof of Concept**

#### **15.5.1 Concept and Repeatability**

In Fig. 15.9, the original proof of concept for a two diopter RIS lens within an IOL is depicted, with a starting power of 5.05 D. The creation of the RIS lens altered the overall lens power to 2.91 D. The pre-lens MTF was 0.53 at 100 lp/mm; the post-lens MTF was 0.40 at 100 lp/mm. The shaping algorithm was further improved

**Fig. 15.9** Creation of a −2D RIS change inside one IOL. Diopter readings and MTF before (**a**) and after (**b**) RIS treatment

**Fig. 15.10** Creation of a −2 D and +2 D RIS change inside one IOL. Modulation map and diopter power map readings before (**a**) and after (**b**) RIS treatment

since then to keep the final MTF at a minimum of 0.43 for spherical changes.

In Fig. 15.10, one of the original proof-of-concept lenses is displayed. The top shows the original modulation map and the bottom the diopter power map, measured using the Nimo from Lambda X. The original IOL measured 5 D; the outside area was treated to produce a +2 D change while the inside area received a −2 D RIS change, resulting in a refractive multifocal IOL [31]. The shaping algorithm has since been further improved to allow for a more precise shaping process, higher diopters and also diffractive multifocal lens shaping.

The consistency and precision of the power changes induced by the laser have been shown to be within 0.1 D of the targeted change without a significant reduction in the MTF. As shown in Fig. 15.11, the same −2.0 D refractive index shaping lens was shaped into nine IOLs to assess the repeatability of the process [13]. Figure 15.11 shows the diopter change of the IOL after the shaping process.
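The repeatability criterion amounts to checking that every measured power change lies within 0.1 D of the target. The nine values below are hypothetical placeholders for illustration only, not the measured data from [13]:

```python
# Hypothetical measurements (D) for a targeted -2.0 D change; illustrative only.
target_d = -2.0
measured_d = [-2.05, -1.98, -1.93, -2.02, -2.08, -1.95, -2.00, -2.04, -1.97]

# Absolute deviation of each shaped IOL from the targeted change.
deviations = [abs(m - target_d) for m in measured_d]
print(f"max deviation: {max(deviations):.2f} D")      # → max deviation: 0.08 D
print(f"all within 0.1 D: {max(deviations) <= 0.1}")  # → all within 0.1 D: True
```

In a real validation one would also verify that the MTF after shaping stays above the stated floor (0.43 for spherical changes).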

#### **15.5.2 Adjustment of Sphere**

In Fig. 15.12, the creation of a refractive +4 D RIS lens is depicted. The original IOL measured 16.59 D with an MTF of 0.5 at 100 lp/mm; after RIS the IOL measured 20.59 D with an MTF of 0.49 at 100 lp/mm [32]. Thus, the RIS technology can be used to change an existing IOL power by up to 4 D while keeping a good MTF.

**Fig. 15.12** Diopter readings and MTF before (**a**) and after (**b**) RIS treatment

#### **15.5.3 Conversion from Monofocal to a Toric IOL**

The RIS procedure is especially beneficial when it comes to the creation of toricity: the lens has already settled, so the toric adjustment will be centered and the axis is fixed. Figure 15.13 shows the creation of a toric lens; the original monofocal IOL measures 22 D, and after RIS a 3 D astigmatism correction along one axis can be measured [33].

#### **15.5.4 Conversion from Monofocal to Multifocal**

In Fig. 15.14, the creation of multifocality in a monofocal hydrophobic IOL is shown. Before treatment, the IOL power was 25.82 D, with an MTF of 0.54 at 100 lp/mm. After treatment, the IOL measures two foci: the original lens power and an additional 3.1 D add with a 62/38 energy split. Thus, the RIS technology can be used to add multifocality to a monofocal IOL.

#### **15.6 Biocompatibility of Intraocular Lens Power Adjustment**

An in vivo study on rabbit eyes confirmed that postoperative outcomes in terms of uveal and capsular biocompatibility were similar for treated lenses and untreated lenses. The laser power adjustment procedure did not induce inflammatory reactions in the eye or damage to the IOL optic.

**Fig. 15.13** Converting monofocal IOL into a toric IOL; schematic view (**a**), before and after RIS (**b**)

**Fig. 15.14** Conversion of a monofocal IOL to multifocal IOL, before (**a**) and after (**b**) RIS

Overall, all implantation procedures were uneventful and the IOLs could be fully injected within the capsular bag. At the 1-week examination, nearly all operated eyes had a mild inflammatory reaction with fibrin in front of the lens or at the level of the capsulorhexis edge. Fibrin formation had completely resolved by the 2-week examination, when a mild amount of posterior capsule opacification (PCO) started to be observed in nearly all eyes. Most eyes at this time point also had proliferative lens cortical material or pearl formation in front of the IOL.

All laser power adjustment procedures were also uneventful, and the laser treatment itself was fast (23 s). Under slit lamp examination, the phase-wrapped structure created by the laser could be observed within the optic substance of all treated IOLs. The examination also showed the formation of gas bubbles between the posterior surface of the IOL and the posterior capsule, which disappeared within 5 h. Other observations included mild corneal edema and conjunctival injection, which could be related to the eye remaining open during the alignment step of the procedure. No aqueous flare, cells, iris hyperemia, or fibrin formations were observed at any of the post-laser slit lamp examinations, and the process did not create glistenings in the IOLs [34, 35].

The consistency and precision of the power changes induced by the laser have been shown in vitro. Another recent study [13] found that the refractive-index change altered the dioptric power of commercially available yellow hydrophobic acrylic IOLs to within ±0.1 D of the targeted change without a significant reduction in the MTF. A more recent study performed in our laboratory also showed the consistency and precision of the power change by this technology in commercially available hydrophobic acrylic lenses with and without a blue-light filter, without inducing significant changes in IOL light transmission.

Our current in vivo study confirmed that postoperative outcomes in terms of uveal and capsular biocompatibility were similar between treated lenses and untreated lenses, as shown during clinical examination and by complete histopathology. The laser power adjustment procedure did not induce inflammatory reactions in the eye or damage to the IOL optic. Alignment of the rabbit eye under the laser system for the adjustment procedure was challenging because it was necessary to anesthetize the animal, which would not be the case in a clinical situation. Even though an eye interface had to be specially designed for this study, which was also the first performed in vivo, the change in power obtained was consistent in the group of treated eyes. It is noteworthy that power measurements of the IOLs were not performed before implantation in the rabbit eyes to avoid compromising the sterility of the IOLs because the main objective of the current study was to evaluate biocompatibility after laser treatment. Therefore, the method used to estimate the changes in power after laser treatment was based on measurements done with the power and MTF device after IOL explantation (Table 15.2).

*IOL* intraocular lens

The most likely cause of postoperative refractive errors after IOL implantation is incorrect IOL calculation resulting from incorrect measurements of the eye [36]. Also, current standards regarding IOL power labeling allow a tolerance of ±0.30 D for IOLs with powers of 0.00 D to 15.00 D. The tolerance increases to ±0.40 D for IOLs with powers greater than 15.00 D up to 25.00 D, which means that an IOL of 22.61 D and another of 23.39 D could both be labeled with a dioptric power of 23.00 D, or the IOL of 23.39 D could be labeled as either 23.0 D or 23.5 D [37]. All these factors make postoperative IOL adjustment technologies particularly interesting.

#### **15.7 Discussion and Conclusion**

The RIS treatment (see e.g. Fig. 15.1a) uses a femtosecond laser to change the hydrophilicity of the targeted area, which in turn changes the refractive index. Combined with a two-dimensional scan pattern, this effect allows the creation of a refractive or diffractive lens inside the material.
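As a rough numerical illustration of how a small refractive index change plus phase wrapping can produce a whole diopter of power inside a thin layer, the sketch below computes a phase-wrapped layer thickness profile for a target power change. The function name, the index change Δn = 0.01, and the 550 nm design wavelength are illustrative assumptions, not values from this chapter.

```python
import numpy as np

def ris_layer_thickness(r_mm, power_D, delta_n=0.01, wavelength_mm=550e-6):
    """Sketch: thickness profile of a phase-wrapped RIS layer.

    A lens of power P adds a paraxial optical path difference
    OPD(r) = -P * r^2 / 2 (r in m, P in 1/m, OPD in m). Wrapping the
    OPD modulo one design wavelength keeps the required layer thin
    (the phase-wrapped structure seen on slit lamp examination); the
    refractive-index change delta_n then sets the physical thickness.
    """
    r_m = np.asarray(r_mm) * 1e-3
    opd = -power_D * r_m**2 / 2.0            # optical path difference, m
    wl_m = wavelength_mm * 1e-3
    opd_wrapped = np.mod(opd, wl_m)          # 2*pi phase wrapping
    return opd_wrapped / delta_n             # physical thickness, m

# thickness profile for a -2 D change over a 3 mm semi-aperture
t = ris_layer_thickness(np.linspace(0, 3, 7), -2.0)
print(np.round(t * 1e6, 2))  # micrometers, always below wavelength/delta_n
```

The point of the sketch is that, thanks to wrapping, the layer never needs to be thicker than one wavelength divided by Δn, regardless of the target power.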

A photochemical process was identified wherein hydrophilic polar functional groups are generated by photo-induced hydrolysis of the polymeric material in areas exposed to the femtosecond laser, providing the chemical basis for a hydrophilicity-based refractive index change and facilitating the creation of a RIS lens. The newly formed functional groups, e.g. amines and carboxylic acids, are strongly hydrophilic. The modified molecules are monomers or dimers embedded in the original polymer and the UV-absorber copolymer; they remain in place and are modified by the exposure to the laser light. In three different polymeric materials, fluorophores with identical spectral signatures were detected. Thus, photo-induced hydrolysis results in rearrangements of chemical bonds, essentially within the UV-absorber molecule, preserving the integrity of the polymeric material. Based on fluorescence microscopy, STED microscopy, and Raman microscopy, no leachables are generated. Standard leachable tests have also been performed on RIS-modified IOLs, and no leachables were found.

The results of the first in vivo study evaluating the biocompatibility of this new application of the femtosecond laser are reported. Refractive Index Shaping (RIS) can be applied to any commercially available hydrophobic or hydrophilic acrylic IOL because the process does not depend on a special IOL material. Power adjustment is noninvasive and fast and can be performed under topical anesthesia. The dioptric power of the IOL can be increased or decreased to account for surgical errors, IOL tilt, IOL decentration, or a change in the physical characteristics of the eye. Multiple adjustments to the same IOL can be performed because each adjustment only changes a very thin layer within the IOL optic substance. Premium functions can be added to the IOL and removed later, if necessary. An added multifocal pattern can, for example, be canceled by application of a pattern with opposite characteristics. The use of special protective spectacles is not necessary after treatment, and the process works with standard commercially available hydrophilic and hydrophobic intraocular lenses.

Refractive Index Shaping (RIS) is an exciting technology with the ability to precisely change the power of an intraocular lens. The RIS process is based not on a special lens but on a device, which is currently not yet approved. This technology has the potential to change the course of ophthalmic cataract surgery and lens accuracy in the future. It is hoped that this technology will allow a minimally invasive treatment for the management of refractive surprises after cataract surgery: improving residual refractive errors with a minimally invasive office procedure, removing the surgical risks, and moving the treatment from the operating room to an in-office setting.

In conclusion, postoperative lens customization utilizes femtosecond laser technology to adjust the power of an implanted IOL. A minimally invasive laser treatment provides customized vision correction for a patient who has had previous cataract surgery, optimizing the patient's vision. This new technology gives the surgeon an additional opportunity to improve a patient's sight. It separates the customization of the lens from the original cataract surgery, giving both the patient and the doctor time to discuss and consider this treatment.

#### **References**



**Part IV**

**Adaptive Optics in Vision Science and Ophthalmology**

**16**

# **The Development of Adaptive Optics and Its Application in Ophthalmology**

### Gopal Swamy Jayabalan and Josef F. Bille

#### **16.1 Introduction**

Wavefront technology was originally developed nearly 50 years ago for astronomical applications. It was used to measure the wavefront distortions that occur when light traveling through the atmosphere enters an optical telescope. By applying adaptive optical closed-loop control, the speckle patterns of star images could be improved towards diffraction-limited performance. Much of the technology was developed in association with research on anti-missile defense systems in the late 1970s. The same technology can be used to correct ocular imperfections.

G. S. Jayabalan
Heidelberg Engineering GmbH, Heidelberg, Germany
University of Heidelberg, Heidelberg, Germany

J. F. Bille (\*)
University of Heidelberg, Heidelberg, Germany

Historically, refractive errors of the human eye were corrected by glasses, contact lenses, and more recently with excimer laser surgery. However, such corrections were limited to the compensation of only the lower order aberrations such as myopia, hyperopia, or regular astigmatism. Indeed, the optical system of the human eye, like any real optical system, generates more complex distortions of the retinal images, the so-called higher-order aberrations. These aberrations are unique to the particular eye of the patient. Under daylight vision conditions the pupil of the human eye is small, e.g. 2–3 mm in diameter, so that the light travels essentially along the optical axis of the eye (Fig. 16.1, left image). Under these conditions higher-order aberrations are limited, so that a sharp retinal image is formed. Under twilight vision conditions, the pupil of the human eye dilates to approx. 5–7 mm in diameter, resulting in increased importance of higher-order aberrations (Fig. 16.1, right image). These higher-order aberrations produce considerable distortions of the retinal image, as a considerable part of the light is transmitted through marginal areas of the eye, away from the optical axis. These image distortions impair visual acuity considerably, even in patients with normal (20/20) vision.

**Fig. 16.1** Left: Daylight vision, Right: Twilight vision

These complex distortions can be assessed with wavefront technology. In recent years, different wavefront sensors based on several principles have been developed, the most important being Tscherning ray tracing and Shack-Hartmann sensors. More recently, the application of wavefront sensing for preoperative evaluation of refractive surgical procedures has been proposed. Adaptive optical closed-loop systems can be used to subjectively measure and compensate the higher-order optical aberrations of the human eye to guide the surgeon in the selection of the parameters of the procedure.

#### **16.2 Brief History of Adaptive Optics**

Starting in 1978, the principle of wavefront measurement and compensation of wavefront aberrations was adopted at the University of Heidelberg for ophthalmic applications. The technique is based on Shack-Hartmann sensing, which measures the optical path of light rays through the eye to detect the aberrations at all points in the optical system of the human eye. Adaptive optical systems were developed to measure and compensate the wave aberrations of the human eye under closed-loop control [1, 2].

As early as 1982, at the Sixth International Conference on Pattern Recognition (ICPR) in Munich, Germany, wavefront sensing and adaptive optical closed-loop control were proposed for aberration-free imaging and vision testing: "The system essentially provides an elimination of optical eye aberrations which diminish the fundus image quality (Table 16.1). On the other hand by active focus control and/or wavefront sensing the aberrations of the human eye like astigmatism of the cornea and spherical aberration of the lens can be measured" [1]. In another publication, the concept of achieving 20/10 visual acuity by adaptive optical visual stimulus generation was described: "In the apparatus of this invention the illuminating laser beam is generally

**Table 16.1** History of adaptive optics closed-loop control


widened to a diameter of between 3 mm and 4 mm, in exceptional cases even still wider, and by compensating for all existing aberrations it is possible to focus the laser beam on a spot of a minimal diameter between 2 and 3 micrometers on the retina. This permits a resolution of more than 5000 image points per scan line, that is, it is possible for example to resolve and represent individual receptors in the fovea. Since the use of optical image focusing under adaptive control produces data on the wavefront of the imaging laser beam, the apparatus of this invention enables the refractive index profile within the eye to be reconstructed, permitting for the first time an automatic determination of the refraction at high accuracy" [2].

**Fig. 16.2** Closed loop adaptive optical system with modal actuator control (adapted by permission from SPIE: [1, 2])

At the same time, an adaptive optical control system was devised and built based on modal actuator control (Fig. 16.2). In modal phase compensation, the wavefront aberration is expanded into an orthonormal system based on Zernike polynomials. In addition, the original concept included a Karhunen-Loève wave expansion, in order to account for partial wavefront distortions with high spatial frequency content [4].

In 1989, Dreher, along with Bille and Weinreb, attempted to measure and correct monochromatic aberrations using an active mirror and obtained retinal images with improved depth resolution using a scanning laser ophthalmoscope (SLO) [5]. The clinical adoption of the Shack-Hartmann wavefront sensor to measure the eye's wave aberration was demonstrated in the early 1990s at the University of Heidelberg, in Bille's laboratory, with Liang working as a graduate student [6]. This led to the key development of closed-loop adaptive optics systems for ophthalmology. Later, Liang with Williams at the University of Rochester built the first closed-loop adaptive optics system that could correct higher-order aberrations of the eye, achieving supernormal vision and visualization of single cells in the human retina [7]. Thereafter, wavefront-guided laser refractive surgery was introduced as a clinical treatment for refractive correction [8]. Although there are many methods to measure ocular aberrations, Shack-Hartmann sensing is considered the finest method to precisely measure the aberrations of the human eye and is generally employed in clinical aberrometers. Wavefront technology has advanced in recent years to produce accurate measurements and diagnoses of higher-order aberrations, leading to wavefront-designed glasses, contact lenses, intraocular implants, and wavefront-guided laser vision correction. Fundus cameras, SLO, optical coherence tomography (OCT), and two-photon ophthalmoscopy (TPO) have also incorporated adaptive optics to achieve diffraction-limited imaging systems [9–11].

#### **16.3 Higher-Order Aberrations**

Any imperfection in the eye will lead to distorted images and a decrease in visual performance. These imperfections are commonly referred to as optical or wavefront aberrations. Wavefront aberrations of the eye are of two types: lower order and higher-order aberrations. The lower order aberrations such as myopia, hyperopia, and astigmatism can be corrected with glasses, contact lenses, or refractive surgery. Higher-order aberrations are the imperfections that cannot be corrected by these technologies. Some degree of higher-order aberration is present in every eye and can be measured using wavefront technology. The wavefront aberrations of the human eye are generally described mathematically by a series of Zernike polynomials (for a detailed description of Zernike polynomials, see Chap. 18).

In Zernike polynomials, the lower order aberrations are represented by the second-order terms and the higher-order aberrations by terms above second order. For example, coma and trefoil are third-order aberrations and the spherical aberration is a fourth-order Zernike term. The lower order aberrations contribute 85% of all aberrations in the eye, whereas the higher-order aberrations contribute only 15%. The higher-order aberrations are more complex than the lower order aberrations and result in difficulty seeing at night, glare, halos, blurring, starburst patterns, or double vision [12]. These aberrations can be measured using a wavefront sensor: the refractive index and optical path variations are measured and used to generate a map that shows the relative retardation that a plane wave undergoes as it traverses the optics of the eye. Clinicians are familiar with the wavefront map and the Zernike polynomial expansion. The coefficients of the Zernike terms yield the total root mean square (RMS) error. In Fig. 16.3, pseudo-3D graphics of the Zernike functions are shown.

#### **16.4 Principle of Aberration Measurement**

In recent years, basically three types of aberration measurement devices have been developed: the thin-beam ray tracing aberrometer, the Tscherning aberrometer, and the Shack-Hartmann method. In Fig. 16.4, the principle of operation of the Shack-Hartmann wavefront sensor is demonstrated. On the left-hand side, the processing of an ideal plane wave is depicted: the incident plane wave results in a square grid of spots in the focal plane of the micro-lens array. On the right-hand side, the imaging of a distorted wave is shown: the distorted wavefront causes lateral displacements of the spots on the CCD array. From the spot pattern, the shape of the incident wavefront can be reconstructed using appropriate curve-fitting algorithms. More than 25 years ago, the first detailed study of the application of wavefront technology for the assessment of the refractive properties of the human eye was performed. From the wavefront measurements, Zernike coefficients were calculated and the wavefronts emerging from the eyes tested were reconstructed. Figure 16.5 shows the equal level contour maps of a human eye [6, 13, 14]. On the left-hand side of Fig. 16.5 the overall wavefront is presented, whereas on the right-hand side only the higher-order aberrations, i.e. the third- and fourth-order Zernike coefficients, are depicted. In this work, the phase error that cannot be corrected by conventional spectacles was defined as the higher-order aberration of the eye. In Fig. 16.6, the principle of the measuring process of the WaveScan™ instrument is shown. The ideal wavefront is represented as a regular grid of spots coded in green. The distorted wavefront is given by an irregular grid of spots coded in red.
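The Shack-Hartmann geometry can be sketched in a few lines: each spot displacement divided by the lenslet focal length gives the local wavefront slope, and integrating the slopes recovers the wavefront shape. The lenslet focal length, pitch, and displacement values below are illustrative assumptions, not parameters of any instrument in this chapter.

```python
import numpy as np

# Minimal Shack-Hartmann sketch: each micro-lens focuses its patch of
# the wavefront to a spot; the lateral spot displacement d relative to
# the reference grid gives the local wavefront slope d / f_lenslet.

f_lenslet = 5e-3          # lenslet focal length, 5 mm (assumed)
pitch = 0.2e-3            # lenslet pitch, 0.2 mm (assumed)

# measured spot displacements (meters) on the CCD, one per lenslet
dx = np.array([0.0, 1.0e-6, 2.0e-6, 3.0e-6])   # linearly growing pattern

slopes = dx / f_lenslet   # local dW/dx at each lenslet
# crude zonal reconstruction: trapezoidal integration of the slopes
W = np.concatenate(([0.0],
                    np.cumsum(0.5 * (slopes[1:] + slopes[:-1]) * pitch)))
print(W)   # wavefront height at the lenslet centers, meters
```

Real reconstructors fit the two-dimensional slope field with least squares or a Zernike expansion; the cumulative sum above is only the one-dimensional idea.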

#### **16.5 Definitions of Optical Imaging Quality**

For the description of the performance of an optical system, there are several parameters in use. Some of them are applied to the human eye as well. A short overview of some scales used in ophthalmology will be given in this section.

#### **16.5.1 Root Mean Square (RMS)**

**Fig. 16.3** Zernike polynomials up to fourth order and an example of sixth order control

The RMS of the wavefront is a very simple criterion. It is nothing but the integrated root mean square of the differences between the wavefront surface and the mean value of the surface. The complex phenomenon of aberration is packed into a single number. This makes it so convenient for ophthalmology. The RMS can be calculated directly from the Zernike polynomials.

**Fig. 16.4** Left: Incident plane wave resulting in a square grid of spots. Right: Distorted wavefront causes lateral displacement of spots

**Fig. 16.5** Equal level contour map of a human eye. Left: Overall wavefront, Right: Higher order aberrations (third- and fourth-order Zernike's) (adapted by permission from Optical Society of America: [6])

**Fig. 16.6** Principle of WaveScan™ measurements. Ideal wavefront represented as a regular grid of spots coded with green color. Distorted wavefront given by an irregular grid of spots coded with red color

For the calculation of RMS, we refer to Zernike polynomials of second order and higher. The zero order is not measured at all. The first order gives information only about tilt, which is connected to the position of the eye and does not supply any information about the characteristics of the eye itself. The Zernike polynomials are orthogonal and the zero order term is set to zero, so the mean value of the wavefront is zero, too. The RMS is thus simply the mean squared value of the wavefront over the pupil.

$$RMS = \sqrt{\frac{\int_0^1 \int_0^{2\pi} W(\rho,\theta)^2\, \rho\, d\rho\, d\theta}{\int_0^1 \int_0^{2\pi} \rho\, d\rho\, d\theta}} = \sqrt{\frac{1}{\pi} \int_0^1 \int_0^{2\pi} W(\rho,\theta)^2\, \rho\, d\rho\, d\theta}$$

In taking mean values of the Zernike polynomials the integral can be replaced by a sum of the weighted coefficients. For a real pupil size, the integration will be from 0 to r.

$$RMS = \sqrt{\frac{\int_0^r \int_0^{2\pi} W(\rho,\theta)^2\, \rho\, d\rho\, d\theta}{\pi r^2}} = \sqrt{\frac{1}{\pi r^2} \int_0^r \int_0^{2\pi} \left(\sum_{i=0}^{order} c_i Z_i(\rho,\theta)\right)^2 \rho\, d\rho\, d\theta}$$

$$= \sqrt{\frac{1}{\pi r^2} \sum_{i=0}^{order} c_i^2 \int_0^r \int_0^{2\pi} Z_i(\rho,\theta)^2\, \rho\, d\rho\, d\theta} = \sqrt{\sum_{i=0}^{order} c_i^2\, Z_i'^2}$$

with $Z_i'^2$ the weighting coefficient for each Zernike term, which depends on the radial order *n* and the angular order *l*:

$$Z_i'^2 = \frac{1}{\left(2-\delta_{l0}\right)\left(n+1\right)}, \quad \text{with } i = \frac{n\left(n+1\right)}{2} + \frac{n-l}{2} + 1$$

The RMS can thus be calculated simply as the root of the weighted sum of squared coefficients. The peak-to-valley (PTV) is closely connected to the RMS. While the PTV depends heavily on just two extreme values, the RMS is a kind of mean value derived from the complete set of data points. This makes the RMS much more stable against outliers.
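The weighted-sum formula is easy to evaluate in code. The sketch below applies the per-term weighting described above to a small set of coefficients; the function name and the example coefficient values are illustrative assumptions.

```python
import numpy as np

# RMS from Zernike coefficients, using the per-term weighting from the
# text: Z'_i^2 = 1 / ((2 - delta_{l0}) * (n + 1)), where n is the
# radial and l the angular order (unnormalized Zernike convention).
def zernike_rms(terms):
    """terms: iterable of (c_i, n, l) with coefficient c_i and orders n, l."""
    total = 0.0
    for c, n, l in terms:
        weight = 1.0 / ((2 - (1 if l == 0 else 0)) * (n + 1))
        total += c**2 * weight
    return np.sqrt(total)

# example: a defocus term (n=2, l=0) and one astigmatism term (n=2, l=2)
rms = zernike_rms([(0.3, 2, 0), (0.1, 2, 2)])
print(rms)
```

Note that with normalized (orthonormal) Zernike polynomials the weights would all be 1 and the RMS would be the plain root of the sum of squared coefficients.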

#### **16.5.2 Optical Aberration Index (OAI)**

The OAI is defined as

$$OAI = 1 - e^{-RMS}$$

The OAI has values between zero and one. Zero stands for an optical system that is perfect and 1 for infinite aberrations. The RMS-value is given as a fraction of the wavelength of light. The OAI is very sensitive in the typical range for higher order aberrations. It was introduced as an even simpler scale for the optical quality of an eye.

#### **16.5.3 Modulation Transfer Function (MTF)**

A typical target for testing the quality of an optical system consists of a series of alternating black and white bars of equal width with a contrast of 1. These targets are related to the Snellen E chart, as used in ophthalmology. The MTF gives the contrast of the image (as a percentage of the contrast of the object) as a function of spatial frequency. The contrast is defined by:

$$Contrast = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}$$

The MTF may be compared to the aerial image modulation (AIM) curve. This curve shows the smallest amount of modulation a sensor, such as a CCD camera or the retina, is able to detect. The AIM is likewise a function of frequency. While the MTF normally decreases with increasing frequency, the AIM increases with frequency. The point of intersection gives the resolution.

$$MTF\left(\nu\right) = \frac{M_i}{M_o} = \frac{2}{\pi}\left(\varPhi - \cos\varPhi \cdot \sin\varPhi\right)$$

With

$$\varPhi = \arccos\frac{\lambda\nu}{2\,NA}$$

$\nu$ = frequency in cycles/mm, $NA$ = numerical aperture, $\lambda$ = wavelength.
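The diffraction-limited MTF formula above can be evaluated directly. The following sketch computes it at a few frequencies; the numerical aperture NA = 0.1 and the 550 nm wavelength are illustrative assumptions, not values from the text.

```python
import numpy as np

# Diffraction-limited MTF of an aberration-free circular pupil:
#   MTF(nu) = (2/pi) * (Phi - cos(Phi) * sin(Phi)),
#   Phi = arccos(lambda * nu / (2 * NA)).
def mtf_diffraction(nu_cyc_per_mm, wavelength_mm=550e-6, NA=0.1):
    x = wavelength_mm * np.asarray(nu_cyc_per_mm, dtype=float) / (2 * NA)
    x = np.clip(x, 0.0, 1.0)          # beyond the cutoff the MTF is zero
    phi = np.arccos(x)
    return (2 / np.pi) * (phi - np.cos(phi) * np.sin(phi))

# cutoff frequency nu_c = 2 * NA / lambda
nu_c = 2 * 0.1 / 550e-6               # roughly 364 cycles/mm here
print(mtf_diffraction([0.0, nu_c / 2, nu_c]))
```

At zero frequency the MTF is 1, it falls monotonically, and it reaches 0 at the cutoff frequency, which is where the intersection with the AIM curve must occur at the latest.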

#### **16.5.4 Point Spread Function (PSF)**

The point response of an optic should ideally still be a point. Even if the optic is perfect, the response is a pattern due to diffraction. In a real system, the aberrations widen the image into a spot. The spot is represented by a two-dimensional distribution, which is described by the PSF.

If the aberrations are smaller than 0.25λ (Rayleigh criterion), the diffraction pattern provides a good description of the PSF. Up to about 2λ, it is appropriate to consider the manner in which the aberration affects the diffraction pattern. For larger wavefront aberrations, a description based on ray tracing is sufficient.

#### **16.5.5 Application of the Performance Indices in a Normal Human Eye**

In Fig. 16.7, the different performance indices are presented for a normal human eye. On the left part, a color-coded presentation of the wavefront is shown. The RMS of 0.23 μm results in an OAI of 0.205. In the middle part, the MTF is plotted, as well as the diffraction-limited MTF for a 6 mm pupil. On the right part, the PSF is graphically presented.

#### **16.6 Principle of Closed-Loop Adaptive Optical Control**

In Fig. 16.8, the principle of closed-loop adaptive optical control is schematically demonstrated. The wavefront of light, distorted by the optical aberrations of the optical system, e.g. the human eye, is measured by a wavefront sensor. The reconstructed wavefront is applied to a wavefront corrector, e.g. an active mirror, in order to compensate for the optical aberrations. Thus, an aberration-free optical image can be achieved through an aberrating medium.
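The closed-loop idea, measure the residual and drive the corrector to cancel it, can be illustrated with a toy integrator loop. This is a sketch under assumed parameters (16 modes, loop gain 0.5, a static random aberration), not a model of any instrument in this chapter.

```python
import numpy as np

# Toy closed-loop adaptive optics control: the sensor measures the
# residual wavefront, and an integrator controller updates the mirror
# so the residual shrinks toward zero on each iteration.

rng = np.random.default_rng(0)
aberration = rng.normal(0.0, 0.3, size=16)   # static eye aberration (um)
mirror = np.zeros(16)                         # mirror correction (um)
gain = 0.5                                    # loop gain < 1 for stability

for _ in range(20):
    residual = aberration - mirror            # what the sensor sees
    mirror += gain * residual                 # integrator update

rms_residual = np.sqrt(np.mean((aberration - mirror) ** 2))
print(rms_residual)   # shrinks by (1 - gain)^20, i.e. essentially zero
```

With a static aberration the residual decays geometrically by a factor (1 − gain) per iteration; in a real system, sensor noise and temporal changes of the eye set the achievable floor.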

**Fig. 16.7** Different representations of image quality of human eye

**Fig. 16.8** Principle of closed-loop adaptive optical control


**Fig. 16.9** Adaptive optics in astronomy: (left) speckle pattern, (right) sharpened picture

#### **16.6.1 Adaptive Optics in Astronomy**

Wavefront technology and closed-loop adaptive optical control were originally developed for astronomical applications, where they were used to measure the wavefront distortions that occur when light traveling through the atmosphere enters an optical telescope. By applying closed-loop adaptive optical control, the speckle patterns of star images could be improved towards diffraction-limited performance. In Fig. 16.9, the principle of operation of an adaptive optical closed-loop system on an optical telescope is demonstrated. In the left image, the speckle pattern of an astronomical object, aberrated by the turbulent atmosphere, is shown. In the right image, the sharpened picture after engagement of the closed-loop adaptive optical control is depicted, demonstrating a double-star image at high spatial resolution. The measurements were performed at the Calar Alto optical telescope operated by the Max Planck Institute for Astronomy, Heidelberg [15].

#### **16.6.2 History of Adaptive Optics at the University of Heidelberg**

In Fig. 16.10, a number of active mirrors and wavefront sensors developed at the Kirchhoff Institute of Physics, University of Heidelberg over the last 40 years are depicted. The first generation foil mirror was successfully applied for the real-time compensation of aberrations of the human eye for high-resolution imaging of the retina [5]. In 2002, closed-loop operational results of the second generation foil mirror were reported [16]. In the early 2000s, a multi-segment microchip mirror was developed, exhibiting approximately 100,000 mirror facets, each able to slightly shift the phase of a local component of the wavefront in order to compensate for the detected wavefront error. In the lower part of Fig. 16.10, two different realizations of Shack-Hartmann wavefront sensors are shown. On the left-hand side, a cylindrical lens array with a CCD detector is shown, which was applied for the first time to measure the aberrations of the human eye in real time [6]. On the right-hand side, a custom ASIC chip detector, used in combination with a custom micro-lens array, is shown [17]. The ASIC chip is divided into a matrix of clusters consisting of photodetectors and signal-processing circuitry. By analog signal processing in winner-takes-all circuitry, the highest photocurrent is detected and its position is calculated. The data obtained are evaluated in real time for reconstruction of the wavefront of the light.

#### **16.7 Demonstration of Adaptive Optics Aberrometer**

An active matrix mirror is used in the device (see Fig. 16.11). It is an array of 200 × 240 micromirrors (40 μm × 40 μm each). Each of the mirrors can be lowered by up to 400 nm independently; the mirrors can only be lowered, without the facility of tilting. With this technique, wavefronts can be corrected up to double the height of deflection, i.e. more than one wavelength. By using the 2π phase wrapping method (Fig. 16.12), the range of wavefront deformations that can be corrected is enlarged by far. The 2π phase wrapping method makes use of the phase properties of light: a sag of 2π between two neighboring mirrors has no effect on the direction of the light and can be subtracted without any effect on the wavefront. So, the range of movement needed for the correction of any wavefront deformation can be reduced to λ/2. In fact, the use of the mirror is limited to light of one wavelength when using the 2π phase wrapping method [16].
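The λ/2 argument above can be sketched numerically: dropping whole wavelengths of optical path leaves a monochromatic wavefront unchanged, and on reflection a mirror sag of s produces an optical path change of 2s, hence the halved stroke requirement. The 550 nm design wavelength and the example error values are illustrative assumptions.

```python
import numpy as np

# 2*pi phase-wrapping sketch: subtracting whole wavelengths of optical
# path has no effect on a monochromatic wavefront, so a mirror with
# less than one wavelength of stroke can still correct much larger
# deformations. On reflection, sag s gives an optical path change of
# 2*s, which is why lambda/2 of mirror movement suffices.

wavelength = 550e-9                      # design wavelength (assumed)

def wrapped_mirror_sag(wavefront_error_m):
    opd = np.mod(wavefront_error_m, wavelength)   # drop whole wavelengths
    return opd / 2.0                              # sag = OPD / 2 on reflection

w = np.array([0.0, 0.4e-6, 1.3e-6, 2.7e-6])       # errors up to ~5 wavelengths
sag = wrapped_mirror_sag(w)
print(sag * 1e9)   # nanometers, all below 275 nm (= lambda / 2)
```

This also makes the limitation explicit: the wrapped profile only cancels the wavefront at the one design wavelength, as noted in the text.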

For an objective test of the active mirror, a test device was constructed (Fig. 16.13), which allows one to measure the phase plate and to look through it into the instrument at the same time. From the camera at the test device, an image of the target is obtained. For the measurements presented here, a target with 1′ apex angle was used, corresponding to a visual acuity (VA) of 1.0.

In Fig. 16.14, the corrections of phase plates are depicted: the wavefront map, the PSF, and the real images of the Siemens-star targets are shown, respectively. On the left-hand side, a phase plate with fourth-order spherical aberration is evaluated; on the right-hand side, a phase plate with third-order trefoil aberration is imaged.

In Fig. 16.15, case studies of patients are exemplified. For patient JS (OD), the RMS value improves from 0.397 μm to 0.149 μm; for patient RB (OD), the RMS value changes from 0.336 μm to 0.113 μm. The patients each gain one line on the Snellen chart, from VA 20/16 to VA 20/12.5 and from VA 20/20 to VA 20/16, respectively.

**Fig. 16.10** History of adaptive optical elements, as developed at the University of Heidelberg

**Fig. 16.13** Left: Test device for the active mirror. Light entering from the left through the phase plate is divided by the dichroic beam splitter cube. The aberrations are measured in the right arm. The bottom arm is used to record an image of the target. Middle and Right: Visual acuity chart

In Fig. 16.16, the reduction of higher-order RMS values of human eyes with adaptive optics compensation is visualized. The effects on the individual orders, from second to sixth, are plotted. The effect on the total RMS (brown graph) amounts to an improvement by a factor of two, from 0.3 μm to 0.15 μm.

#### **16.7.1 Clinical Prototype Adaptive Optics Aberrometer**

In Fig. 16.17, an adaptive optics aberrometer is depicted, with which the first clinical studies on adaptive optics vision testing were performed [18]. The study was conducted at eye clinics near Würzburg, Germany, and assessed the reproducibility and accuracy of the measurements. Through adaptive optics compensation, higher-order aberrations such as coma and spherical aberration could be reduced from 0.3 μm to 0.1 μm RMS error. The limit of resolution of the instrument was less than 0.1 μm, corresponding to a relative measurement accuracy of about 10<sup>−5</sup> given an eye length of approximately 20 mm, achieved on living eyes. As an interesting result, we found that approximately half of the patient population exhibited an amount of

**Fig. 16.14** Left: Correction of phase plates of fourth order spherical aberration. Right: Correction of phase plates of third order trefoil aberration. *WF* wavefront map, *PSF* point spread function, *Image* real image

**Fig. 16.15** Left: Case study—Patient JS (OD). Right: Case study—Patient RB (OD)

**Fig. 16.16** Higher-order aberration reduction in human eyes with the microchip mirror

**Fig. 16.17** Setup of adaptive optics aberrometer

0.3 μm of higher-order optical aberrations of the eye, which play a considerable role with regard to visual acuity and which cannot be corrected with conventional eyeglasses or contact lenses.

#### **16.7.2 A Case Study on a Refractive-Surgical Patient with Clinical Prototype**

In Figs. 16.18 and 16.19, the correction of higher-order wave aberrations for a refractive-surgical patient is demonstrated. In Fig. 16.18, on the right-hand side, the uncompensated coma modeled into a phase plate is shown, resembling the aberration of a human eye before therapeutic custom ablation correction. The peak-to-valley (PTV) difference amounts to 2 μm, the RMS error to 0.72 μm. The patient achieved a best spectacle-corrected visual acuity (BSCVA) of 20/40. On the left-hand side of Fig. 16.18, the appearance of the WaveScan™ tunnel target is correspondingly blurred. In Fig. 16.19, the compensated wavefront and the target image are depicted. By closed-loop adaptive optical control, the RMS error can be reduced to 0.07 μm, corresponding to one-tenth of a wavelength of light. The WaveScan™ tunnel target image is sharp-

**Fig. 16.18** Phase plate simulating human eye with high coma. Left: defocused image, Right: uncompensated wavefront

**Fig. 16.19** Compensation of high coma aberration. Left: Focused image, Right: compensated wavefront

ened, accordingly. Indeed, the patient's vision was improved by a therapeutic custom ablation procedure to nearly perfect 20/12.5 performance.

#### **16.8 The Limits of Human Vision**

In Fig. 16.20, the results of the wavefront measurements for the left eye (OS: oculus sinister) of patient TK are depicted. A modest amount of coma, a third-order aberration, is present. The RMS error of the higher-order aberrations of this eye amounts to 0.656 μm. About 20% of all human eyes exhibit such an RMS error, measured at a pupil size of approximately 6 mm in diameter; for older patients, this percentage is substantially higher. After engaging the micro-mirror device, i.e. with adaptive optics compensation, the RMS error can be reduced to 0.166 μm, corresponding to approximately λ/5. At the same time, the patient's visual acuity is enhanced: in this case it improves from 20/20, i.e. normal vision, to 20/16, i.e. the patient can read one additional line on the Snellen chart.

In Fig. 16.21, research results are shown which exemplify the limits of human vision. The MTF is plotted as a function of spatial frequency, measured in line pairs (cycles) per degree of visual angle, visualizing the optical quality of a human eye. A spatial frequency of 30 cycles/degree corresponds to 20/20 vision, i.e. a smallest resolvable visual angle of 1 arcminute; a smallest resolvable visual angle of 0.5 arcminute results in 20/10 vision, corresponding to a spatial frequency of 60 cycles/degree. For reference, the diffraction-limited MTF is sketched for a 3 mm pupil (blue) and a 6 mm pupil (black). The red broken line characterizes the uncompensated MTF of the left eye of patient TK (OS). With a 6 mm pupil, patient TK (OS) exhibits an MTF of 0.15 at 30 cycles/degree, corresponding to 20/20 vision. A human eye needs a contrast of approximately 10% in order to perceive a fine grid of lines spaced at the related spatial frequency. After adaptive optics compensation, the green broken MTF curve results, with an improvement of the image contrast by a factor of 5 at 30 cycles/degree. As described previously (see Fig. 16.20), patient TK (OS) reached 20/16 vision after adaptive optics compensation. In the case of patient WE (OD: oculus dexter, i.e. right eye) (red solid line), only minute higher-order aberrations are present; nevertheless the adaptive optics compensation

**Fig. 16.20** Case study: Patient TK (OS)


**Fig. 16.21** Visual acuity, MTF

results in a considerable improvement of the MTF, such that the MTF stays above 0.1 even at 120 cycles/degree, corresponding to 20/5 vision, i.e. four times normal human vision. For patient WE (OD), uncorrected vision amounted to 20/16 and reached 20/10 after compensation. Human eyes can reach at most 20/10 vision, limited by the neuronal threshold of visual acuity.
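The conversions used above (resolvable spatial frequency to Snellen acuity, and the diffraction cutoff of a circular pupil) can be written out directly. This is an illustrative sketch with function names of our own choosing:

```python
import math

def cycles_per_degree_to_snellen(f_cpd: float) -> str:
    """Snellen fraction for the finest resolvable grating frequency.
    30 cyc/deg means 1-arcmin detail, i.e. 20/20; generally the minimum
    angle of resolution (MAR) in arcmin is 30 / f_cpd."""
    mar_arcmin = 30.0 / f_cpd
    return f"20/{20.0 * mar_arcmin:g}"

def diffraction_cutoff_cpd(pupil_mm: float, wavelength_nm: float = 550.0) -> float:
    """Incoherent diffraction cutoff of a circular pupil, f_c = D / lambda
    in cycles/radian, converted to cycles/degree."""
    return (pupil_mm * 1e-3) / (wavelength_nm * 1e-9) * math.pi / 180.0

# 30 cyc/deg -> "20/20", 60 -> "20/10", 120 -> "20/5".
# At 550 nm, a 6 mm pupil cuts off near 190 cyc/deg, a 3 mm pupil near 95,
# both well beyond the 60 cyc/deg neuronal limit discussed below.
```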

Campbell and Green published a seminal paper on the neuronal limit of human vision in 1965 [19]. This limit is related to the size of the retinal photoreceptors: in the fovea, the retina exhibits cones with smallest dimensions of 2.2 μm, corresponding to a visual angle of 0.5 arcminute, i.e. a visual acuity of 20/10. At the limit of resolution, lateral inhibition is disabled, i.e. the initial neuronal processing of the retina is rendered ineffective. An individual neuronal sensitivity curve is sketched in yellow in Fig. 16.21. According to this individual characteristic, a contrast of 0.8 would be necessary to achieve 20/10 vision; a contrast of 0.8 at 60 cycles/degree lies above the diffraction-limited characteristic. Thus, the achievable improvement of visual acuity through adaptive optics compensation of the optical aberrations of the human eye is limited by the individual neuronal threshold. Our clinical studies have shown that the neuronal threshold characteristics can be enhanced by adaptation, based on a lengthy learning process.
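The step from cone size to the 0.5-arcminute limit is simple geometry. A hedged sketch, assuming a reduced-eye model with a posterior nodal distance of about 16.7 mm (a common textbook value, not taken from this chapter):

```python
import math

def receptor_angle_arcmin(spacing_um: float, nodal_mm: float = 16.7) -> float:
    """Visual angle subtended by one receptor in a reduced-eye model
    (assumed ~16.7 mm posterior nodal distance): theta = s / f."""
    theta_rad = (spacing_um * 1e-3) / nodal_mm
    return math.degrees(theta_rad) * 60.0

# A foveal cone spacing of 2.2 um subtends ~0.45 arcmin, so the Nyquist
# sampling limit (two receptors per grating cycle) sits near 60 cycles
# per degree, i.e. roughly 20/10 vision.
```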

However, as demonstrated above (see Fig. 16.21, MTF characteristic of patient WE (OD)), the optical quality of human eyes can be optimized to a smallest PSF of approximately 1.25 μm spot size (Airy disc), which is essential for high-resolution imaging of the retina of the human eye (see Chap. 17).
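The quoted spot size is of the order of the diffraction-limited Airy radius on the retina. As a rough sketch (the exact value depends on the wavelength and eye-model parameters assumed; here we take r = 1.22·λ·f/D with a nominal 16.7 mm focal length):

```python
def airy_radius_um(wavelength_nm: float, pupil_mm: float,
                   focal_mm: float = 16.7) -> float:
    """Radius of the first Airy minimum on the retina for a simple
    reduced-eye model (assumed ~16.7 mm effective focal length):
    r = 1.22 * lambda * f / D, returned in micrometres."""
    return 1.22 * (wavelength_nm * 1e-3) * focal_mm / pupil_mm
```

At 550 nm through a 6 mm pupil this gives roughly 1.9 μm; using the narrower FWHM of the Airy core (factor ~1.03 instead of 1.22) and shorter wavelengths yields values closer to the ~1.25 μm figure quoted above.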

#### **16.9 Aberration-Free Retinal Imaging**

Retinal imaging has been an integral part of every ophthalmic examination for the early diagnosis and follow-up of retinal diseases. Fundus cameras, SLOs, and OCT systems provide a macroscopic view of the retina; however, these instruments lack the transverse resolution to reveal its microscopic structures. Improving the resolution of retinal images has long been a goal for researchers [20]. The main limitation is the pupil size: diffraction dominates the PSF at small pupil sizes and aberrations at large pupil sizes [21]. Adaptive optics has advanced retinal imaging techniques by improving image quality. It was first applied in a fundus camera by Liang et al. in 1997 to resolve the cone photoreceptors [7]. Since then, adaptive optics has been implemented by many researchers for aberration-free retinal imaging. In 2002, Burns et al. used phase plates with a confocal SLO to correct higher-order aberrations and achieved a 26% increase in the contrast of retinal blood vessels [22]. Meanwhile, Roorda et al. presented the first adaptive optics confocal scanning laser ophthalmoscope (cSLO) using a Shack-Hartmann sensor, providing a real-time microscopic view of the human retina; the resolution achieved was 2.5 μm lateral and 100 μm axial, compared to 5 μm lateral and 300 μm axial in a conventional SLO [11]. OCT has also been integrated with adaptive optics for aberration-free retinal imaging, improving both axial and lateral resolution; adaptive optics was combined with ultrahigh-resolution OCT and spectral-domain OCT [10, 23]. Likewise, adaptive optics has been incorporated into TPO, aimed at detecting the early onset of retinal diseases [9].

In Fig. 16.22, the adaptive optics improvement of the PSF through compensation of the

**Fig. 16.22** Adaptive optics Improvement of PSF through compensation of the optical aberrations of a human eye. *WF* wavefront map, *PSF* point spread function

**Fig. 16.23** The arrangement of the three cone classes (L/M-, S-cones) in the living human eye

higher-order optical aberrations of a human eye is demonstrated. The RMS wavefront error of the uncorrected wavefront was 0.339 μm; after compensation, an RMS wavefront error of 0.085 μm was measured. The uncorrected and corrected PSFs are depicted in the lower part of Fig. 16.22.
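How such an RMS reduction translates into PSF quality can be estimated with the Maréchal approximation, which relates residual RMS wavefront error to the Strehl ratio (peak PSF intensity relative to the diffraction-limited case). A hedged sketch, valid only for small residual errors:

```python
import math

def marechal_strehl(rms_um: float, wavelength_um: float = 0.55) -> float:
    """Marechal approximation for the Strehl ratio,
    S ~ exp(-(2*pi*sigma/lambda)^2), reliable only for small residual
    RMS wavefront error sigma (roughly below lambda/7)."""
    phi = 2.0 * math.pi * rms_um / wavelength_um
    return math.exp(-phi * phi)

# A residual of 0.085 um RMS at 550 nm gives a Strehl ratio near 0.39,
# i.e. a substantial fraction of the diffraction-limited peak; the
# uncorrected 0.339 um RMS lies far outside the approximation's range
# and corresponds to a badly degraded PSF.
```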

Human color vision depends on three classes of receptor: the short- (S), medium- (M), and long- (L) wavelength-sensitive cones. These cones are interleaved in a single mosaic so that, at each point on the retina, only a single class of cone samples the retinal image. As a consequence, observers with normal trichromatic color vision are necessarily color blind on a local spatial scale. The limit this places on vision depends on the relative numbers and arrangement of the cones. Although the topography of human S-cones is known, the human L- and M-cone submosaics have resisted analysis. Adaptive optics, a technique used to overcome blur in ground-based telescopes (see also Fig. 16.8), can also overcome blur in the eye, allowing the sharpest images ever taken of the living retina [24]. In Fig. 16.23, the arrangement of the three cone classes (L/M-, S-cones) in the living human eye is depicted. Adaptive optics and retinal densitometry are combined to achieve aberration-free images of the living human retina. The proportion of L to M cones is strikingly different in two male subjects, each of whom has normal color vision. The mosaics of both subjects have large patches in which either M or L cones are missing. This arrangement reduces the eye's ability to recover color variations of high spatial frequency in the environment but may improve the recovery of luminance variations of high spatial frequency.

#### **16.10 Wavefront-Guided Laser Refractive Surgery (CustomVue)**

Wavefront-guided laser refractive surgery (CustomVue) began with a pilot study at the Augenpraxisklinik (eye clinics) Heidelberg in the year 2000, in which the first fifty wavefront-guided eye corrections worldwide were performed. Subsequently, five hundred patients were treated in a multi-center FDA study in the United States.

In Fig. 16.24, the results of the FDA study are depicted. The blue bars of the bar diagram describe the visual acuity before surgery (BSCVA: best spectacle-corrected visual acuity). All five hundred patients of the study group exhibited 20/20 vision; 59% reached 20/16 vision and 6% reached 20/12.5 vision. The green bars present the visual acuity after wavefront-guided refractive laser surgery (UCVA: uncorrected visual acuity). After

**Fig. 16.24** Improving vision with custom refractive procedures

treatment, 100% of the patients maintained at least 20/20 vision, the proportion of patients with 20/16 vision improved to 74%, and the proportion with 20/12.5 vision reached 32%, i.e. nearly a third of the five-hundred-patient study group reached the level of optically perfect vision. Since then, wavefront-guided laser refractive surgery has been successfully performed on many millions of patients.

#### **16.11 Summary**

The introduction of wavefront technology in ophthalmology allows the optical aberrations of the human eye to be determined far beyond the sphero-cylindrical refractive error. Based on WaveScan technology, the reproducibility and accuracy of the new technique were established in worldwide multicenter clinical studies; one of its most powerful clinical applications is wavefront-guided refractive surgery. In this chapter, it was demonstrated that closed-loop adaptive optical control allows for improved spatial resolution of aberration measurements, increasing the resolution limit by two orders of magnitude over, e.g., Shack-Hartmann technology. Adaptive optics has also proven its ability to resolve the microstructures of the retina by correcting the optical aberrations down to the diffraction limit. Aberration-free retinal imaging with adaptive optics will improve our understanding of the visual system in the normal and diseased eye.

**Acknowledgement** The historical background and the basics of adaptive optics aberration measurements were published previously in reference [25].

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Adaptive Optics for Photoreceptor-Targeted Psychophysics**

**17**

Wolf M. Harmening and Lawrence C. Sincich

#### **17.1 Seeing Cells of the Retina**

The eye has a clear advantage over other sensory organs when it comes to directly observing its neurosensitive structures non-invasively, for it is already built as an image forming system. The eye's large aperture and transparent cornea and lens allow a nearly unobstructed view of the retina lining the back of the globe, where the neuronal tissue resides and can be seen with even simple optical instruments. In this chapter we will focus on a relatively recent experimental approach that makes use of that access to study the human retina at the most elementary level of the single cone photoreceptor. By directly linking targeted stimulation of individual photoreceptors to subjective visual perception in psychophysical experiments, visual processing mechanisms operating at the cellular level in the retina can be uncovered.

The technological hurdles of cell-based psychophysics were overcome by building upon continued innovations in ophthalmoscopic imaging. Ophthalmoscopy, a field

L. C. Sincich Department of Optometry and Vision Science, University of Alabama at Birmingham, Birmingham, AL, USA

devoted to viewing the inside of the eye, began with the invention of the direct ophthalmoscope in 1851 by Hermann von Helmholtz, and initiated a close relationship between imaging innovations of ever-increasing fidelity on the one hand and milestone discoveries in vision research and clinical ophthalmoscopy on the other [1]. From the outset, Helmholtz noted the optical imperfections in the eye that make closer examination challenging [2]. Due to natural irregularities in the shape and refractive index of the optical media, the eye introduces optical aberrations that fundamentally limit the quality of the images acquired in an ophthalmoscope [3]. Nevertheless, in eyes with minimal native aberrations, single photoreceptors can be readily observed. By taking photographs of the retina through the pupil, the first *in vivo* images of the photoreceptor mosaic were produced in animals with especially large receptors, the garter snake (*Thamnophis* spp.) [4] and the cane toad (*Bufo marinus*) [5]. The first *in vivo* images of human photoreceptors were captured with a custom digital fundus camera [6]. Here, a key prerequisite to seeing single cells was careful correction of defocus and astigmatism to sufficiently improve the optical quality of the image-forming process.

Concurrently, other technological breakthroughs in retinal imaging were the development of optical coherence tomography (OCT) [7] and the scanning laser ophthalmoscope (SLO) [8]. While OCT readily creates cross-sectional

W. M. Harmening (\*)
Department of Ophthalmology, University of Bonn, Bonn, Germany

images of the retina along the optical axis in an interferometric approach, confocal SLO technology provides *en face* images of the retina by detecting backscattered light reflected from a transverse plane of the retina. Both OCT and SLO are able to resolve the retinal mosaic of photoreceptor cells in young eyes with clear ocular media when both defocus and astigmatism are minimal, typically at perifoveal retinal locations where the photoreceptors are in the range of 8 μm in diameter [9, 10]. Similar near-diffraction-limited resolution in eyes with more demanding natural aberrations was only achieved by equipping ophthalmoscopy with a set of tools first developed in astronomy to improve the resolution of ground-based telescopes: adaptive optics (AO) [11–13] (see also Chap. 16).

The core of an AO system is an adjustable wavefront-correcting element, typically a deformable mirror, first implemented in an SLO in 1989 [14]. Without a wavefront sensor, correction of previously determined low-order aberrations (defocus and astigmatism) could be performed; however, image quality did not improve by a large margin, mostly because higher-order aberrations were left uncorrected, a prominent issue when the imaging beam fills the aperture of a dilated pupil. A key advance came in 1994 with the use of a Shack-Hartmann wavefront sensor to continuously measure ocular aberrations [15], and the first AO ophthalmoscope able to correct higher-order aberrations in closed-loop operation, based on measurement of the ocular wavefront, was demonstrated in 1997 [16]. The introduction of adaptive optics in ophthalmoscopy marked the birth of a new generation of high-resolution retinal imaging devices, and today, 20 years after its first appearance, AO for the eye is available, partly commercially, in three main imaging modalities: fundus photography [16, 17], optical coherence tomography [18, 19], and scanning laser ophthalmoscopy [20, 21].

Here we provide an overview of an experimental approach that uses adaptive optics combined with scanning laser ophthalmoscopy (AOSLO) to study the function of individual photoreceptors in the human retina. AOSLO systems are primarily deployed in experimental and clinical research settings to study microscopic retinal structure with high lateral resolution, and as such, are often used as a pure imaging system [13, 22, 23]. An AOSLO can also be utilized as a microscopic stimulation platform where controlled amounts of light are briefly flashed at precise retinal locations to stimulate an area approaching the size of a single photoreceptor while a subject responds in a psychophysical task [24, 25]. This specific experimental technique has emerged during the last 10 years, and there is currently no other method available with comparable optical precision, spatial control of delivered stimuli, and freedom in experimental options.

In the following sections we will first briefly review key retinal factors that have an impact on retinal image formation and vision before we turn to the optical and technical requisites needed to experimentally control the activity of single cone photoreceptors seen during AOSLO imaging. We will then illustrate a few empirical findings, to demonstrate what can be learned about the visual system in healthy and diseased retinas when photoreceptor function is probed *in vivo*.

#### **17.2 Retinal Factors Interacting with Photoreceptor Imaging and Function**

Photoreceptors are the first cells that transform the stream of photons impinging on the retina into neurochemical signals, mediating vision in the brain [26, 27]. The photoreceptors sit in the most posterior layer of the retina; consequently, all incoming light has to travel through the overlying retinal tissue, comprised of blood vessels and a dense network of neurons and interneurons, before it can be absorbed for the use of vision. Here we briefly review pertinent aspects of the retina's cellular composition and how it interacts with incoming light before we delve into how visual function can be approached through the activation of single photoreceptors.

As the light sensitive neuronal tissue of the eye, the human retina lines the inner walls of

**Fig. 17.1** The retina and photoreceptor mosaic. (**a**) Hematoxylin and eosin stain of a human retina in cross section. Light entering the eye (arrows) passes through dense layers of neuronal cells and blood vessels (asterisks) before it is absorbed in the outer segments of the photoreceptors. *ILM* inner limiting membrane, *NFL* nerve fiber layer, *GCL* ganglion cell layer, *IPL* inner plexiform layer, *INL* inner nuclear layer, *OPL* outer plexiform layer, *ONL* outer nuclear layer, *OLM* outer limiting membrane, *IS* photoreceptor inner segments, *OS* photoreceptor outer

the globe. Except around the fovea where it is thicker, most of the retina is ~300 μm in depth, containing three prominent cell body layers, two extensively interwoven synaptic layers, and four membranous layers (Fig. 17.1a). From anterior to posterior, following the light as it enters the eye, these strata are: (1) the inner limiting membrane, a thin Müller-cell derived layer separating the vitreous body from the retina; (2) the nerve fiber layer containing ganglion cell axons that carry the retinal signals to the brain; (3) the ganglion cell layer of ~one million cells consisting of more than 20 functionally distinct cell classes; (4) the inner plexiform layer, a synaptic network funneling bipolar and amacrine cell signals onto the ganglion cells; (5) the inner nuclear layer,

segments, *RPE* retinal pigment epithelium, *BM* Bruch's membrane, *CHO* anterior part of the choroid. (**b**) In a macaque retinal flat mount at ~3° eccentricity (~0.6 mm away from the foveal center), photographed with differential interference contrast microscopy, cone inner segments are visible as closely packed circular structures, interspersed with a few smaller rod inner segments. (**c**) *In vivo* AOSLO imaging of cones from another macaque, also at 3° eccentricity. Rods are unresolved in this image, but are likely to be nestled in the dark gaps between cones

containing the cell bodies of bipolar, amacrine, Müller, and horizontal cells; (6) the outer plexiform layer, formed by the synapses between multiple bipolar cell classes and photoreceptors; (7) the outer nuclear layer of the rod and cone photoreceptor cell bodies; (8) the outer limiting membrane, an epithelial structure providing mechanical strength; and finally (9) the photoreceptor inner and outer segments, the latter being where phototransduction takes place upon the absorption of light by the photopigments. The outer segments are embedded in the retinal pigment epithelium, a single layer of polygonal, highly pigmented cells, serving to absorb uncaptured light and carry out important phagocytotic functions of the visual cycle.

When light hits the eye, it has to pass through all anterior retinal layers before it is absorbed at the cone outer segments. Some of these layers introduce considerable light distortion, a fact that is capitalized on in OCT, where backscattered light is used to produce a useful cross section of the retina [7]. For AOSLO imaging and microstimulation, some of these light-tissue interactions are critical and can be readily observed in the images collected. The most prominent sources of light distortion are the vessels of the inner retinal vasculature (Fig. 17.1a), manifesting as cast shadows in AOSLO images of the photoreceptor layer. It has been shown that cones that sit in such shadows have reduced sensitivity [28] and that the spectral sensitivity of penumbral cones is changed compared to their open-field neighbors [29]. The nerve fiber layer also scatters strongly and can be visualized in confocal AOSLO images [30], but its impact on cone-targeted stimulation light is as yet unknown.

The strongest signal in AOSLO images stems from reflections originating in the photoreceptor layer (Fig. 17.1b, c). Each human retina carries an average of 92 million rod photoreceptors and 4.6 million cone photoreceptors [31]. These cells are unequally distributed in the retina, with foveal cones reaching an average peak density of ~200,000 cells/mm<sup>2</sup> (declining rapidly to less than 10,000 cells/mm<sup>2</sup> at ~6° eccentricity), while rod density peaks at ~150,000 cells/mm<sup>2</sup> near 10° eccentricity, with none in the fovea itself. Rods are nearly uniformly sized at 2 μm diameter (at the inner segment) and respond to single photon absorptions, making them the foundation of scotopic vision [32, 33]. Many rods make contact with the same target bipolar cell, so their combined output is amplified by signal convergence. In contrast, cones vary in size with eccentricity, from a foveal minimum of ~1.5 μm to ~8 μm at larger eccentricities. Phototransduction is more rapid in cones than in rods and is less sensitive, requiring ~50 photon absorptions to trigger a threshold response [34]. Cone-mediated vision is specialized for high resolution, both temporally, to detect fast image motion, and spatially, for high-acuity tasks. In trichromatic animals such as humans, each cone carries one of three types of photopigment with distinct absorption maxima within the visible spectrum, and the cones are hence called long- (L), medium- (M), and short- (S) wavelength-sensitive. The spatial arrangement of these cone classes is surprisingly variable in every retina and was first revealed in human eyes using AO imaging [35, 36].
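For readers who want to relate the densities above to receptor spacing: for an idealized hexagonal packing, the center-to-center spacing follows directly from the areal density. A sketch under that idealization (real mosaics are less regular):

```python
import math

def hex_spacing_um(density_per_mm2: float) -> float:
    """Center-to-center spacing of an ideal hexagonally packed mosaic of
    the given areal density (cells/mm^2), returned in micrometres.
    Each cell occupies a hexagonal area of (sqrt(3)/2) * s^2."""
    return math.sqrt(2.0 / (math.sqrt(3.0) * density_per_mm2)) * 1000.0

# ~200,000 cones/mm^2 at the foveal peak gives a spacing near 2.4 um,
# while <10,000 cones/mm^2 at ~6 deg eccentricity gives >10 um spacing,
# consistent with the ~1.5-8 um range of cone sizes cited above.
```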

The retina is probably one of the best-studied sensory tissues, yet its functional architecture is still the subject of active scientific investigation, in part because it is relatively difficult to study in the intact organism. As we will see in the next section, by coupling AOSLO imaging to cone-targeted microstimulation, light can be shed on retinal function all the way from pre-receptor optical factors (such as the geometry of light capture by cone outer segments), through post-photoreceptor factors (such as horizontal cell feedback), to visual perception (such as spatial and color vision).

#### **17.3 Resolving and Targeting Individual Photoreceptors for Visual Function Testing**

Studying retinal function can be done in many ways, perhaps the simplest being the brief presentation of a spot of light somewhere in the visual field and asking the subject or patient, "Did you see it?" With some added degree of spatiotemporal control, this is how one of the most fundamental visual function tests, perimetry, is performed in a standard clinical examination. In clinical perimetry, as well as in most psychophysical studies of visual function, many photoreceptors are stimulated simultaneously even when small spots of light are used (e.g. Goldmann size I, 6.5 arcmin), and the visual percept is necessarily a product of summed receptor activity. When the goal is to characterize the function of individual photoreceptors, the task becomes more challenging. Cell-resolved visual function testing demands that single photoreceptors be probed selectively. To target a single receptor, it needs to be visible, and in order to perform psychophysical testing, the cell has to be stimulated with light repeatedly while limiting any light that might land on neighboring receptors [37]. Here we discuss the main optical, physiological, and technical challenges that arise in such experiments, how they can be overcome, and what insights about vision can be gained with a single-cell-targeted approach.

#### **17.3.1 Monochromatic Aberration Correction**

For cell-resolved retinal imaging and stimulation, optical challenges arise because the eye itself introduces a number of optical imperfections. Foremost among these is the optical quality of the dioptric apparatus of the eye, formed by the cornea and the crystalline lens. These tissues are made up of cells that grow into place during development, and since such biological processes cannot always form perfectly, the lens and cornea do not mature with an optically ideal shape. Indeed, human eyes manifest particular lower- and higher-order monochromatic aberrations caused by irregularities in the shape and refractive power of these structures [38]. Rays of light traversing the eye are thus refracted irregularly, causing optical distortions that ultimately limit the quality of the retinal image. With small pupils, image blur due to diffraction outweighs aberrations, and the resolution of the eye is close to diffraction-limited. A larger pupil, on the other hand, adds to diffraction blur the many distortions related to ocular aberrations, which are prominent when the incoming beam passes through larger portions of the cornea and lens [39]. As all current AO imaging systems use a large beam to achieve the best retinal imaging, the correspondingly large pupil (via pharmaceutically induced dilation) sweeps in all these aberrations, which then need to be compensated.

For ophthalmoscopic imaging of the human eye, the limiting aperture is the pupil, the nearly circular opening formed by the iris, a muscular extension of the ciliary body, which can take on diameters anywhere between ~1 and 8 mm. This pupil sets the lateral resolution limit for imaging, typically defined mathematically by the point spread function (PSF). Assuming a perfect optical system, the form of the eye's PSF would be solely governed by diffraction, and therefore is a function of wavelength and optical aperture size (see also Chap. 19). It is here where we face a trade-off: large pupils allow higher spatial resolution (smaller PSFs) but at the same time increase the extent of ocular aberrations that need to be corrected. The latter hurdle is overcome by AO, enabling good resolution with aberrations minimized.
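The pupil trade-off can be quantified through the angular radius of the diffraction-limited PSF, θ = 1.22·λ/D. A minimal sketch:

```python
import math

def psf_radius_arcmin(wavelength_nm: float, pupil_mm: float) -> float:
    """Angular radius of the first Airy minimum of a circular pupil,
    theta = 1.22 * lambda / D, converted from radians to arcminutes."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (pupil_mm * 1e-3)
    return math.degrees(theta_rad) * 60.0

# At 550 nm, a 6 mm pupil gives a PSF radius of ~0.38 arcmin while a
# 2 mm pupil gives ~1.15 arcmin: halving the pupil doubles the
# diffraction-limited PSF, which is why AO imaging systems favour
# large, dilated pupils despite the extra aberrations they admit.
```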

In a real eye, optical quality is also significantly reduced by factors other than dioptrics. Light scattering within the ocular media and tissues has to be considered, especially when aiming to deliver spatially restricted light to photoreceptors. Some degree of forward light scattering occurs within each of the tissues lying in the path in front of the photoreceptors, while light that reflects from tissues behind the photoreceptors will also diffuse small stimuli. Both types of scattering are considered straylight and will cause the ocular PSF to broaden and hence reduce the contrast of retinal images [40, 41]. The amount of straylight is only weakly correlated with pupil size [42]. With current optical techniques, even AO, straylight cannot be compensated for and so will cause some portion of the light in micron-scale stimuli to be captured by non-targeted photoreceptors.

It should now be clear that the goal of an AO system for ophthalmic imaging is to reduce the contribution of monochromatic aberrations to approach diffraction-limited lateral resolution while keeping within physiological pupil sizes [13, 43]. Correcting ocular aberrations is achieved by an optical element that can flexibly alter the wavefront of the beam entering the eye, typically either a deformable mirror or a liquid crystal spatial light modulator. Deformable mirrors can take on complex shapes through mechanical deflection by an array of linear actuators [44]. Alternatively, liquid crystal light modulators can alter the phase of a transmitted beam point by point [45]. In both cases, the phase of the incoming wavefront is locally adjusted to create a flattened wavefront when the beam is reflected from the back of the eye. Of course, this requires that the wavefront be known. There are several ways to measure the wavefront aberrations of the eye; the most commonly used is a Shack-Hartmann sensor, an image sensor placed behind an array of microlenses ([15], see also Chap. 16). Here, a beam of light reflected from the retina is imaged onto the lenslet array and the focused spots behind each lens are analyzed. Any offset of the spots from a perfect orthogonal grid indicates a distortion in wavefront shape. This error signal can be used to drive the corrective elements of a deformable mirror or light modulator to create a compensatory wavefront shape, either in a continuous closed-loop fashion or in a discontinuous open-loop mode with repeated measurements [46]. Ocular wavefronts can also be measured via surface plasmon excitation with a pair of highly sensitive CCD sensors [47]. There are also sensorless methods, where wavefront corrections are derived directly from the acquired retinal image quality in an iterative algorithmic process [48].
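The core of the Shack-Hartmann measurement, converting spot displacements into local wavefront slopes, can be sketched in a few lines. This is an illustrative simplification: real sensors add centroiding, thresholding, and a modal or zonal reconstruction step before driving the corrector.

```python
import numpy as np

def local_slopes(spot_centroids, reference_grid, lenslet_focal_mm):
    """Convert Shack-Hartmann spot displacements into local wavefront slopes.

    spot_centroids, reference_grid: (N, 2) arrays of (x, y) positions in mm.
    A displacement d of the focused spot behind a lenslet of focal length f
    corresponds to a local wavefront slope of approximately d / f (radians).
    Sketch only; real systems follow this with a wavefront reconstruction.
    """
    displacements = np.asarray(spot_centroids) - np.asarray(reference_grid)
    return displacements / lenslet_focal_mm   # (N, 2) slopes in x and y

# A flat wavefront leaves every spot on the reference grid (zero slope);
# a uniform x-shift of all spots indicates pure tip/tilt:
ref = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2]])
tilted = ref + np.array([0.001, 0.0])
slopes = local_slopes(tilted, ref, lenslet_focal_mm=5.0)
print(slopes[:, 0])   # every lenslet reports the same x-slope (2e-4 rad)
```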

For retinal imaging, AO is employed in three main imaging modalities: flood illumination ophthalmoscopy [16], in combination with an SLO [20, 21], or with OCT [18, 19]. All three modalities have complementary advantages for retinal imaging and all are of great clinical relevance since microscopic retinal structures become visible in the living subject [49–51]. These structures include the retinal nerve fiber layer and lamina cribrosa [22], blood flow with single blood cell resolution in the smallest capillaries [52–54], individual cone and rod photoreceptors [10, 55–57], retinal ganglion cells [58, 59], and the mosaic of the retinal pigment epithelium [60, 61]. For visual stimulation, the AOSLO is able to produce stimuli with the highest spatial precision. This is because imaging and stimulation can be spatially and temporally coupled within the same beam in an SLO [62–64].

#### **17.3.2 Stimulus Light Modulation and Image Motion Compensation**

Another set of concerns for cone-targeted stimulation is how to control when and where the stimulus will be delivered to the retina, and how to account for the often substantial retinal motion. In an AOSLO, images of the photoreceptor mosaic are created by collecting the reflected light from a focused beam that is raster scanned across a small square area on the retina over time (Fig. 17.2). This rastered image is produced by deflecting the system's beam horizontally and vertically. To achieve video frame rates at image sizes of, for example, 512 × 512 pixels, one scan direction operates at 30 Hz, while the orthogonal deflection rate—for a square image aspect ratio—is the product of the number of lines in the scan field and the frame scan rate (512 × 30 Hz ≈ 15 kHz). To create each video frame, each pixel is rendered by assigning the temporal signal in the light detectors to a spatial coordinate that corresponds to the current position of the imaging beam within the raster. The spatio-temporal relationship between beam position and acquired image pixel is not linear, because the high scan frequencies needed for the fast scan axis are produced by sinusoidal travel of a resonant scanning mirror. This means that the time needed for the beam to traverse the retinal space that corresponds to a single image pixel varies with beam position, with slower speeds close to the reversal points of each scan line. These sinusoidal distortions can be corrected by recording an image of an equi-spaced grid mounted in a model eye and then de-sinusoiding any ocular images on- or offline to achieve isotropic pixels in the captured frame [65, 66].

**Fig. 17.2** Schematic of AOSLO microstimulation. A visible stimulus is produced by high-speed acousto-optic modulation of a focused beam that is scanned across the retina. Because both imaging and stimulus wavelengths (840 and 543 nm, respectively) travel along the same beam path, stimuli can be positioned with high retinal contingency when eye motion and chromatic offsets have been compensated for by a set of control systems (see text for details).
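The de-sinusoiding remap described in the text can be sketched as a one-dimensional resampling step. This is a minimal sketch assuming an ideal sinusoidal scan over the usable half-period; real systems calibrate the warp empirically with a grid target in a model eye.

```python
import numpy as np

def desinusoid_row(row: np.ndarray) -> np.ndarray:
    """Resample one scan line from uniform-in-time to uniform-in-space pixels.

    Over the usable half-period of a resonant scanner, the beam position
    follows x(t) ~ sin(...), so pixels clocked at a constant rate bunch up
    near the turnaround points. Interpolating back onto an equally spaced
    spatial grid yields isotropic pixels. Idealized sinusoid assumed here.
    """
    n = row.size
    t = np.linspace(-0.25, 0.25, n)            # phase of the usable half-scan
    x_of_t = np.sin(2 * np.pi * t)             # actual (normalized) beam position
    x_uniform = np.linspace(x_of_t[0], x_of_t[-1], n)
    return np.interp(x_uniform, x_of_t, row)   # values at equally spaced positions

# A scene whose intensity equals the spatial position looks warped in the
# raw time-ordered samples; de-sinusoiding recovers the linear ramp:
n = 512
t = np.linspace(-0.25, 0.25, n)
raw = np.sin(2 * np.pi * t)
fixed = desinusoid_row(raw)
print(np.allclose(fixed, np.linspace(raw[0], raw[-1], n), atol=1e-3))  # True
```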

Retinal stimulation can be achieved in a multi-wavelength AOSLO by passing the imaging and stimulus light through independent acousto-optic modulators (AOMs) prior to their entry into the scanning and corrective portions of the optical path. Stimulus lights can thus be modulated synchronously with, and in precise spatial register to, the retinal image. This correspondence allows custom stimuli to be delivered to the retina, pixel by pixel, at selectable locations within the imaging raster [67]. One difficulty for repeated stimulation at the same retinal location is that even if a subject's head is perfectly immobilized in front of the fixed beams of an AOSLO, any retinal locus remains an ever-moving target. This is because the eye is in constant motion, even during steady fixation [68, 69]. While the subject is not aware of fixational eye movements and their amplitudes are small, they are large compared to the size of single photoreceptors: a visual stimulus typically translates across tens to hundreds of cone photoreceptors during normal viewing [70, 71]. Because of the scanning nature of the AOSLO, the consequences of these small eye movements are readily observable and directly measurable in the acquired images [72, 73]. Fast software stabilization algorithms have been developed to measure image strip offsets and to correct the AOM timing signals accordingly [74, 75]. With this real-time stabilization of fixational eye movements, stimulus positions can be locked onto selected retinal locations with a residual position jitter of about 0.15 arcmin, a distance slightly smaller than the diameter of the smallest photoreceptors [76]. Saccades and microsaccades are too large to be corrected at present, so they must be ignored, or at least recognized when they occur so that any compromised data can be rejected.
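The measurement at the heart of strip-based stabilization, registering an image strip against a reference frame, can be sketched with FFT-based cross-correlation. This is an illustrative skeleton: real implementations run on GPUs at sub-frame latency, work on narrow strips with subpixel interpolation, and handle image distortion within each strip.

```python
import numpy as np

def strip_offset(strip: np.ndarray, reference: np.ndarray):
    """Estimate the (dy, dx) translation of an image strip relative to a
    same-sized reference frame via FFT-based circular cross-correlation.

    Sketch of the core of strip-based eye-motion tracking; real systems add
    subpixel refinement, confidence checks, and GPU acceleration.
    """
    f_ref = np.fft.fft2(reference)
    f_strip = np.fft.fft2(strip)
    xcorr = np.fft.ifft2(np.conj(f_ref) * f_strip).real
    dy, dx = np.unravel_index(int(np.argmax(xcorr)), xcorr.shape)
    h, w = reference.shape
    dy, dx = int(dy), int(dx)
    if dy > h // 2:        # map circular lags into a signed range
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shifted = np.roll(ref, (3, -5), axis=(0, 1))   # simulate retinal motion
print(strip_offset(shifted, ref))               # -> (3, -5)
```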

Because human photoreceptors respond to light over an enormous dynamic range, studying their visual function across that range is challenging: standard visual stimulation devices such as LCD monitors can display only a limited luminance contrast. By cascading two commercially available fiber-coupled acousto-optic modulators (AOMs), i.e., feeding the output of one AOM into the input of a second AOM, a multiplicative extinction ratio can be achieved. Single light switch events as short as 50 ns with radiant power contrasts up to 1:10¹⁰ have been demonstrated [77], which essentially spans the normal dynamic range of cone photoreceptors. Psychophysically, this contrast ratio was shown to be sufficient to stimulate single foveal photoreceptor cells with visible targets that are small and bright enough yet contain no detectable background light. Background-free stimulation allows testing with custom adaptation lights, and the larger dynamic range of displayable light levels can drive photoreceptor responses in cones as well as in the scotopic regime of rod photoreceptors.
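The multiplicative nature of the cascade is worth making explicit. Suppose each AOM alone provides an extinction ratio of about 1:10⁵ (a plausible single-device figure used here for illustration, not a value stated in the chapter); cascading two then multiplies the leakage suppression, so the contrasts add in log units:

```python
import math

# Assumed single-device extinction ratio (illustrative, not from the chapter):
single_extinction = 1e5

# Feeding the output of one AOM into a second multiplies the suppression:
cascaded = single_extinction * single_extinction
print(f"cascaded extinction 1:{cascaded:.0e}")     # 1:1e+10

db = 10 * math.log10(cascaded)                     # optical power ratio in dB
print(f"{db:.0f} dB of radiant power contrast")    # 100 dB
```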

#### **17.3.3 Chromatic Dispersion Compensation**

To provide more freedom in stimulation options, AOSLO systems are designed with separate imaging and stimulation wavelengths [78, 79]. Typically, near-infrared wavelengths are used for imaging because of the high retinal reflectivity in that part of the spectrum (we should note that at the powers required to image the retina, this infrared light is often visible as a deep red square), while shorter visible wavelengths are used for stimulation, especially when color phenomena are being investigated. Because the refractive power of the eye varies as a function of wavelength, the effects of chromatic dispersion must be considered. Chromatic dispersion in the ocular media causes light of different wavelengths to focus in different axial planes and at different locations in the transverse plane, effects termed longitudinal chromatic aberration (LCA) and transverse chromatic aberration (TCA), respectively [80]. LCA has been shown to be relatively consistent between individuals, and can be compensated for by adjusting the relative vergence angles of the fiber optic point sources as they enter the system [79]. The direction and magnitude of TCA are more idiosyncratic: they depend on the position of the imaging and stimulation beams relative to the eye's achromatic axis [81, 82], which is not centered on the pupil [83]. Typical lateral shifts of beam positions due to TCA can easily exceed the diameter of single cones, so TCA has to be carefully corrected for cone-targeted stimulation. Because transverse beam position shifts can be directly measured by comparing the images formed with the employed wavelengths in the AOSLO, the combined effects of TCA and spatial offsets between imaging and stimulus beams can be compensated for each subject and for every eye position individually [79]. One consequence of this approach is that a significant amount of light (e.g., equaling a luminance of ~50,000 cd/m² at 543 nm) is required to capture retinal images with the visible wavelengths. In practice, chromatic offset measurements thus have to be performed before or after psychophysical experiments, because these light levels are too bright for concurrent stimulation and would prevent visual adaptation from returning to normal states. This intermittent measurement leaves some uncertainty about whether the correction remained accurate during an experiment, so the use of a bite bar to restrict head movements to a minimum is advised [24].
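The magnitude of the LCA that must be pre-compensated can be estimated from a standard reduced-eye model. The sketch below uses the Indiana "chromatic eye" model (D(λ) = p − q/(λ − c), λ in micrometers); the model and its constants are a widely used textbook approximation, not a formula given in this chapter.

```python
def chromatic_defocus_D(wavelength_nm: float) -> float:
    """Chromatic difference of refraction of the human eye, in diopters.

    Indiana 'chromatic eye' reduced-eye model: D(lam) = p - q / (lam - c),
    with lam in micrometers. An illustrative standard model, not a formula
    from this chapter.
    """
    p, q, c = 1.68524, 0.63346, 0.21410
    lam = wavelength_nm / 1000.0
    return p - q / (lam - c)

# LCA between the stimulus (543 nm) and imaging (840 nm) wavelengths:
lca = chromatic_defocus_D(840) - chromatic_defocus_D(543)
print(f"LCA ~= {lca:.2f} D")   # roughly 0.9 D to offset via source vergence
```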

#### **17.3.4 Cone Targeted Psychophysics**

Finally, the combination of the technological innovations described in the preceding sections has enabled the study of *in vivo* psychophysical responses when single cones or groups of cones are targeted for stimulation. Even when one cone is targeted, it must be recognized that a small fraction of the light will always fall on nearby cones, a fraction that is hard to measure. The stimulus uncertainties are mainly due to residual uncorrected aberrations, stimulus delivery errors, and uncontrolled scatter [37]. Nevertheless, some key advances have been made in our understanding of how visual perception is driven by the selective activation of single photoreceptors, and we will briefly review them here.

As a first proof-of-principle, psychometric functions of sensitivity to light increments have been recorded in normal subjects when cone-sized stimuli were targeted at single parafoveal cone centers or at the space between them (Fig. 17.3) [24]. It was found that thresholds could be measured reliably when such stimuli were delivered to the same cone. Moreover, when the light was intentionally targeted to the space between cones, thresholds rose substantially, directly demonstrating that the light capturing capabilities of the retina are spatially discrete. Modeling the light sensitivity of small groups of cones, like that in Fig. 17.3a, as Gaussian light apertures showed that some stimulus blur remained, ~0.06 D, but this is likely due to uncertainty about the exact focal position that yields the best AOSLO imaging. The basic result suggests that the spatial grain of perception is constrained by the exact arrangement of cones in any patch of retina and the exact placement of stimuli onto those waveguiding cones.
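The Gaussian-aperture picture explains why thresholds rise when the spot lands between cones: the overlap of a Gaussian stimulus with a Gaussian cone aperture falls off with their offset. The sketch below uses the analytic overlap of two 2-D Gaussians; the sigma values and cone spacing are illustrative assumptions, not measurements from the study.

```python
import math

def relative_capture(offset_um: float, stim_sigma_um: float = 0.5,
                     cone_sigma_um: float = 0.8) -> float:
    """Relative light capture of a Gaussian cone aperture for a Gaussian
    stimulus centered offset_um away.

    The overlap integral of two 2-D Gaussians is proportional to
    exp(-d**2 / (2 * (sigma_s**2 + sigma_c**2))). Sigma values here are
    illustrative assumptions, not fits from the chapter.
    """
    s2 = stim_sigma_um ** 2 + cone_sigma_um ** 2
    return math.exp(-offset_um ** 2 / (2 * s2))

# Aiming at a cone center vs. at the gap between two cones ~2.4 um apart
# (each flanking cone then sits ~1.2 um from the delivery point):
on_center = relative_capture(0.0)
in_gap = relative_capture(1.2)
print(on_center, in_gap)   # capture drops to under half when aimed at the gap
```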

**Fig. 17.3** Increment threshold sensitivity to cone-sized stimuli. (**a**) Perceptual responses can be measured when stimuli are targeted at one cone photoreceptor (top), or at the space between them (bottom). Contour lines represent the relative light distribution of spot stimuli after repeated trials (n = 22). (**b**) An increment sensitivity threshold can be measured for each location with an adaptive staircase, shown here as five runs at the same location (horizontal line = mean threshold value from all runs). Note that more light was needed (higher threshold) to detect the stimulus when it was targeted at the space between cones. (**c**) The positional effect of stimulus placement on sensitivity was shown in four subjects at various locations, with a mean cone-to-gap ratio of 1.3. Data published in [24].

An unexplained observation in all confocal AOSLO images is that cone reflectivity varies to a large degree, from moment to moment and from day to day in the same subject. Could these differences in cone reflectivity be linked to differences in light capture? To test this idea, pairs of cones (one appearing bright, one dark) were stimulated with randomly interleaved trials in five subjects with normal vision, again using an increment sensitivity design to see whether cone reflectance could predict thresholds [84]. In ten such directly tested cone pairs, no relationship between cone brightness and cone sensitivity was found (Fig. 17.4). Moreover, across normally reflective cones studied across several months and subjects (n = 284 pairs), there was a 3.6% higher threshold in darker cones, but this difference was not significant. This small effect was greatly outweighed by the variability of cone brightness in AOSLO images, and thus appears negligible for any sensitivity tests. The degree of cone reflectivity is known to be unrelated to cone spectral class, whether L, M, or S [36, 85]. Instead, it is likely that light interference in the waveguiding cone may play a role, possibly related to outer segment length changes as they occur during daily phagocytosis [50, 86].

**Fig. 17.4** Cone sensitivity is unrelated to reflectivity. (**a**) Example cone pair tested as described in the text for threshold sensitivity, one normally reflective (beneath stimulus light intensity contours) and one relatively dark (dashed outline). (**b**) Among ten pairs in five subjects, dark cone thresholds were not significantly different from normally reflective cones, when compared to the mean of each pair. Data are shown individually for persistently dark and intermittently dark cones, as classified across many imaging sessions. This result indicates that cone reflectance in AOSLO images is not closely coupled to light absorbing efficacy. Horizontal bar is the mean across all pairs, vertical bars are ±1 SD. Data published in [84].

That single photoreceptors can be stimulated to elicit percepts has been shown by several groups [24, 85, 87]. As we have seen in Sect. 17.2, the cellular network of the retina is, however, a complex circuit, where downstream neurons play key roles in how visual signals are shaped before they reach conscious interpretation by the brain. Some of these postreceptoral mechanisms are now being studied by AO microstimulation. One example from the realm of spatial vision is visual information pooling. By recording increment sensitivity thresholds from retinal areas decreasing in size down to the single cone, it was shown that even when optical aberrations and eye motion have been minimized, summation areas in the fovea are as predicted by Ricco's law. This suggests that foveal spatial summation is limited by post-receptoral neural pooling with a fixed spatial extent, with parasol ganglion cells being a likely candidate for defining the summation area for the tested stimulus conditions [88]. Direct horizontal cell mediated interactions between cones were tested when cones of known class were stimulated in the presence of cone class-biasing adaptation background lights [89]. Here, a group of ~100 cones in each of two color-normal subjects was first biophysically classified according to opsin type, yielding a map identifying the L, M, and S cones within a small retinal patch [90]. Next, increment thresholds with different background lights that bias sensitivity towards either L or M cones were measured for each classified photoreceptor. It was found that the composition of cell types in the immediate neighborhood of each targeted cell modulated its sensitivity: if more neighbors were of the opposite type, thresholds were higher. This could be explained by lateral inhibition driven by the background light, likely mediated by horizontal cells, which make negative feedback connections with L and M cones indiscriminately.
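Ricco's law of complete spatial summation states that, within the summation area, threshold intensity times stimulus area is constant. A toy model makes the prediction concrete; the summation-area size and the flat behavior beyond it are crude illustrative assumptions, not fits from the study.

```python
def threshold_vs_diameter(diameter_arcmin: float,
                          ricco_diameter_arcmin: float = 5.0,
                          k: float = 1.0) -> float:
    """Toy model of Ricco's law: within the summation area, threshold
    *intensity* falls inversely with stimulus area (I * A = const).
    Beyond it, summation is incomplete; crudely modeled here as a plateau.
    Parameter values are illustrative assumptions, not from the chapter.
    """
    area = diameter_arcmin ** 2        # proportional to true stimulus area
    ricco_area = ricco_diameter_arcmin ** 2
    if area <= ricco_area:
        return k / area                # complete summation
    return k / ricco_area              # no further pooling benefit

# Doubling the diameter of a small spot (4x the area) quarters the threshold:
print(threshold_vs_diameter(1.0) / threshold_vs_diameter(2.0))   # -> 4.0
```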

Another set of experiments took full advantage of AOSLO microstimulation to examine color sensitivity directly. Color percepts were gauged in subjects while a small monochromatic spot was placed persistently onto one cone, with a wavelength of 543 nm that would equally activate L and M cones. Two color-normal observers were asked to name the color perceived. These stimuli, presented against a uniform white background, were targeted at cells that had been functionally classified beforehand, so the relationship between opsin type (cone class) and color percept could be revealed [85]. Color sensations generated by targeted cones were found to be stable over time, and not inextricably bound to cone type. In addition, both L and M cones more often elicited achromatic than chromatic percepts. These results are consistent with the idea that color and spatial information can flow along separate pathways, beginning from the first synapse in the retina. With a similar experimental design, it was shown that the exact spatial makeup of the cone mosaic and its types has an impact on hue categorization. M cones, more often than L cones, generated blue percepts in the presence of a short-wavelength background, and in one of the two tested subjects, this likelihood was elevated when more S cones were in the immediate vicinity of the probed M cone (Fig. 17.5), indicating a direct interaction between these two cone classes [91]. By carefully adjusting the wavelength composition and intensity of the background and the intensity of the stimulus light, another study found that color naming and saturation ratings 1.5° from the fovea were highly correlated with cone type, independent of stimulus spot intensity [92]. Such a result suggests that the visual system has enough spatial resolution to assign a meaningful hue label to stimulus lights that selectively activate single, functionally colorblind, cones.

**Fig. 17.5** Color perception associated with single cone photoreceptors. (**a**) In one subject at three different locations (rows) where cones have been densitometrically classified (L, M, and S cone locations plotted in red, green, and blue), cones were stimulated with a 543 nm cone-sized stimulus against either a white or blue uniform background (columns). Subjective hue scaling responses were recorded and are depicted as ring plots, giving the percentage of responses within the possible hue categories (white, red, yellow, green, blue) at each cone, tested 10 times. (**b**) When a blue background was used, hue percepts shifted towards blue, an effect particularly visible in M cones (top), and less so in L cones (bottom). Error bars are SEM, dashed lines connect responses from one cell. Unpublished data provided by Brian Schmidt, Katharina Foote, Alexandra Boehm and Austin Roorda, UC Berkeley.

#### **17.4 Cell-Resolved Vision Testing in Clinical Ophthalmology**

With a continuously growing number of studies using AO to image the diseased retina, the ability to see individual photoreceptors in a patient's eye has become a valuable tool in the ophthalmological clinic [22, 23]. With AO imaging, it is apparent that many retinal diseases exhibit structural changes of the photoreceptor mosaic involving cell loss or disruption, often discernible only at the microscopic level. The prospect that cell-resolved imaging may serve as a tool for early detection of latent disease holds considerable promise for vision health, and future therapeutic approaches may be identified efficiently with photoreceptor-based imaging biomarkers [93, 94].

Translating the ability to study retinal function with AO microstimulation into a clinical setting ought to yield new insights, but comes with its own set of complications that need consideration. For patient populations, the primary challenges are poorer image quality due to aged or opaque optical media, increased eye movements due to a disturbance of fixational capabilities, and larger constraints on functional testing time due to limits in a patient's ability to remain in a study chair. Due to its mode of operation, AO wavefront correction depends on reflected light from the retina. With cloudy optical media, e.g. in the presence of a cataract, it is currently not possible to create a correctable wavefront signal, and thus AO offers no advantage over conventional imaging techniques. Because of the much smaller field of view, retinal imaging with an AO system is also much more negatively affected by eye movements, which in turn makes retinally stabilized stimulus delivery difficult. In some forms of ocular disease, such as retinal dystrophies, normal fixational eye movements are accompanied by large nystagmus-type eye movements [95]. While smaller fixational eye movements can be compensated for by hardware- or software-based image stabilization tools [76, 96], larger motion amplitudes can be counteracted by active beam steering [96, 97]. An intense pathological nystagmus, however, can considerably prolong the imaging process, degrade image quality, or render correction completely impossible [43, 56, 98]. Essentially any disease that affects foveal-mediated fixation, such as age-related macular degeneration, is expected to make targeted stimulation challenging, although AO imaging without stimulation could still be performed.

Despite these additional complications, AO microstimulation in retinal disease is an active and growing field of research that has already extended our knowledge about normal and abnormal photoreceptor structure and function. Specific operational features have been developed to help make AO microperimetry more useful for patient testing [28]. In addition to real-time image stabilization, cone test locations can be digitally stored for reuse, making it easier to overcome interruptions during testing and to allow follow-up sessions to be initiated quickly. Psychophysical measurements of cone sensitivities can be time consuming, and thus any testing strategy has to be optimized for efficiency. Adaptive staircase procedures such as QUEST have been demonstrated to converge to threshold after about 15–20 trials when testing single cones [24]. The clinically traditional 4–2 dB threshold strategy, as employed in automated perimetry testing for instance, can also be used to quickly converge towards perceptual thresholds, albeit with coarser resolution [99]. The first clinical visual function testing with AO microperimetry was a case of Idiopathic Macular Telangiectasia Type II, a rare early-onset disease of the outer retina (Fig. 17.6). In these eyes, in retinal areas where photoreceptors reflected weakly or were not apparent at all, small-spot visual sensitivity was found to be normal [100], suggesting that cones not oriented along the axis of the imaging beam can still retain functionality. This behavior is reminiscent of the finding that dark cones in healthy retinae produce sensitivity thresholds that are indistinguishable from cones with normal reflectivity [84], and that some cones that are dysflective in AOSLO and OCT images convey normal visual sensitivity in a case of acute bilateral foveolitis [101]. Taken together, these first studies of clinical conditions demonstrate that the relationship between cone images and cellular function is not straightforward, and that structural information alone is insufficient to characterize the functional integrity of the retina, especially in cases of retinal disease. Psychophysical or biophysical cell-targeted function testing is likely to become an important adjunct to imaging in order to arrive at a clearer picture of normal as well as disease status in the human retina [24, 102, 103].

**Fig. 17.6** Clinical AOSLO microperimetry. In a patient with Macular Telangiectasia Type II, several retinal locations close to the preferred retinal locus of fixation (asterisk) were tested with AOSLO microstimulation (test locations and stimulus size shown with square markers). Detection thresholds for light increments were normal or close to normal in all locations (marker color), despite a markedly disrupted retinal appearance. Areas outside the solid line and inside the dashed-dotted line appear normal with a regular cone mosaic. Areas inside the solid line are hyporeflective, with few discernible cones. Inside the dashed line, the retina is more transparent, showing a mosaic of underlying retinal pigment epithelial cells. Data published in [100].
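The 4–2 dB staircase mentioned in the text can be sketched compactly. Details of clinical staircases (stopping rules, bounds, threshold definition) vary between instruments, so the version below is an illustrative assumption rather than any particular device's algorithm.

```python
def staircase_4_2(respond, start_db: float = 20.0, floor_db: float = 0.0,
                  ceiling_db: float = 40.0, reversals_to_stop: int = 2) -> float:
    """Minimal sketch of a clinical 4-2 dB staircase.

    Intensity steps by 4 dB (down after 'seen', up after 'not seen') until
    the first reversal, then by 2 dB; testing stops after a fixed number of
    reversals. `respond(level_db)` is a caller-supplied function returning
    True if the stimulus was seen. Stopping rule and bounds are illustrative.
    """
    level, step, reversals, last = start_db, 4.0, 0, None
    while reversals < reversals_to_stop:
        seen = respond(level)
        if last is not None and seen != last:
            reversals += 1
            step = 2.0
        last = seen
        level += -step if seen else step
        level = max(floor_db, min(ceiling_db, level))
    return level   # coarse estimate of the detection threshold in dB

# Simulated deterministic observer with a true threshold of 11 dB:
estimate = staircase_4_2(lambda level: level >= 11.0)
print(estimate)   # -> 10.0, close to the simulated 11 dB threshold
```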

#### **References**




**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

**Compact Adaptive Optics Scanning Laser Ophthalmoscope with Phase Plates**

**18**

Gopal Swamy Jayabalan, Ralf Kessler, Jörg Fischer, and Josef F. Bille

G. S. Jayabalan (\*) · J. F. Bille
Heidelberg Engineering GmbH, Heidelberg, Germany
University of Heidelberg, Heidelberg, Germany

R. Kessler · J. Fischer
Heidelberg Engineering GmbH, Heidelberg, Germany

#### **18.1 Introduction**

One of the major concerns in ophthalmology is preserving vision, since even a minor loss in visual acuity can severely impact quality of life. Since the invention of the first ophthalmoscope by Hermann von Helmholtz in 1851, fundus imaging has become an essential part of retinal examination for diagnostic and monitoring purposes. Recently, many advanced retinal imaging techniques have emerged to provide new insights into the pathogenesis of retinal diseases. A broad range of clinical instruments, such as the slit lamp, fundus camera, direct and indirect handheld ophthalmoscopes, confocal scanning laser ophthalmoscope (cSLO), optic nerve head analyzer, ultrasonography, and optical coherence tomography (OCT), is widely used in clinics for early diagnosis and for monitoring retinal disease progression. However, these instruments cannot identify the earliest stages of retinal disease, since pathogenesis begins at a cellular level and visualization of changes at a microscopic level is required [1]. Therefore, a major effort toward developing retinal imaging techniques that resolve individual cells is essential.

Retinal imaging at a cellular level is challenging: the instrument must have a lateral resolution approaching the cell size to resolve neighboring cells within the same focal plane, combined with penetration through absorbing and scattering media in the eye, optical sectioning, speed, sensitivity, and contrast generation. Likewise, to visualize individual cells, the camera must have sufficient resolution and contrast. With handheld ophthalmoscopes and fundus cameras, the gross anatomical features of the retina can be observed over large areas, but these instruments do not provide clinically relevant information at a cellular level. The cSLO and OCT have a better effective resolution than the fundus camera and have been widely implemented for various clinical applications, including detection of biomarkers of diabetic retinopathy, AMD, and glaucoma. Regardless of these technological advancements, imaging of the retina at a cellular level has been limited by ocular aberrations, which limit the lateral resolution. The human eye is not a perfect optical system, and ocular aberrations impair vision as well as retinal image quality. Although defocus and astigmatism can be corrected by spectacles, the idea of correcting the higher-order aberrations in the human eye with customized contact lenses was proposed in the early 1960s by Smirnov. Later, in 1997, Liang et al. successfully corrected the higher-order aberrations, providing normal eyes with supernormal optical quality and allowing imaging of the retina at a microscopic scale [2]. Since then, adaptive optics has been implemented by numerous research groups for high-resolution imaging of the retina. By integrating adaptive optics, the aberrations can be compensated, enabling visualization of cone photoreceptors, rod photoreceptors, and leucocytes. In vivo retinal imaging of these structures helps to non-invasively monitor retinal function, the progression of retinal diseases, and the efficacy of therapies at a microscopic spatial scale [3].

Adaptive optics has been combined with techniques including scanning laser ophthalmoscopy, funduscopy, and OCT. In general, adaptive optics systems are composed of two main components: a wavefront sensor and a deformable mirror. The wavefront sensor measures the aberrations induced by the optical system and the eye; the deformable mirror corrects the aberrations by physically changing its surface shape to match the measured aberration. Thus, adaptive optics enables an essentially aberration-free, diffraction-limited system [4]. However, the cost and complexity of adaptive optics ophthalmoscopes currently obstruct their clinical use. To overcome this, researchers have developed compact, low-cost non-adaptive optics ophthalmoscopes that can visualize cone photoreceptors and nerve fiber bundles. Yet the foveal rod and cone photoreceptors can be visualized only with adaptive optics ophthalmoscopes.
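The sensor-plus-corrector feedback just described can be sketched as a simple integrator loop in modal (e.g. Zernike) space. The function names and the integrator form are illustrative assumptions; real systems insert a calibrated reconstruction matrix between sensor slopes and mirror actuator commands.

```python
import numpy as np

def ao_closed_loop(measure_residual, n_modes: int,
                   gain: float = 0.5, iterations: int = 20) -> np.ndarray:
    """Skeleton of an AO control loop in modal (e.g. Zernike) space.

    Each cycle: measure the residual wavefront error seen by the sensor
    after the current correction, then subtract a gain-weighted fraction
    from the corrector command. Illustrative sketch only.
    """
    command = np.zeros(n_modes)
    for _ in range(iterations):
        residual = measure_residual(command)  # error *after* correction
        command -= gain * residual            # simple integrator update
    return command

# Toy 'eye' with a static aberration; the sensor reports aberration + command:
true_aberration = np.array([0.3, -0.1, 0.05])   # arbitrary modal coefficients (um)
final = ao_closed_loop(lambda cmd: true_aberration + cmd, n_modes=3)
print(np.round(final, 3))   # the command converges toward -true_aberration
```

With a loop gain of 0.5, the residual error shrinks by half each cycle, so after 20 iterations the correction is accurate to a few parts in ten million of the initial error.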

Even though image quality and resolution have improved with non-adaptive optics ophthalmoscopes, cone photoreceptors can be visualized only in healthy eyes with minimal ocular aberrations and a small pupil [5]. Ocular aberrations are smaller for small pupils and increase with pupil size. For small pupils (2–3 mm), diffraction dominates the point spread function (PSF) and hence the quality of retinal imaging, while for large pupils the major source of degradation in the PSF and retinal image quality is the aberrations. The efficacy of an ophthalmic scanning laser imaging system also depends on the effective spot size of the laser system; it is therefore vital that the system's laser beam converge to a diffraction-limited focal spot. The eye itself imposes significant optical and phase aberrations on the imaging beam, and these aberrations have to be compensated, i.e. removed or minimized, for diffraction-limited imaging. In principle, the diffraction limit can be reached by establishing a flat wavefront of the imaging beam. A flat wavefront can be achieved by measuring the aberrations and producing a phase plate on the basis of the measurements to compensate them. Wavefront detection technology has been widely used for refractive surgery and allows surgeons to customize the procedure for each eye [6]. The same technique is combined with the aberration correction unit in adaptive optics systems to improve retinal image quality (see Chap. 16 for adaptive optics techniques). Likewise, it is logical to use a "*phase plate*" to compensate the wavefront aberrations and thus obtain high-resolution, high-contrast images from aberrated eyes, improving the overall resolution and contrast of scanning laser ophthalmoscopes [7]. The improvement in contrast and resolution of retinal images allows direct observation of retinal microstructures and analysis of their integrity and pathological abnormalities. Thus, a compact form of adaptive optics can be realized at minimal cost and complexity and can be easily incorporated into the cSLOs already in clinical use.

In this chapter, the design and development of a phase plate for cSLO is described to compensate the higher-order aberrations of the eye and to improve the contrast of retinal imaging.

#### **18.2 Wavefront Aberrations**

A typical optical imaging system consists of an object plane, an optical system and an image plane, as shown in Fig. 18.1. A wavefront is a surface over which an optical disturbance has a constant phase, and it is always perpendicular to the rays. The wavefront aberration is the optical path length difference, along rays, between the actual wavefront and the ideal wavefront at the exit pupil. The shape of the wavefront is either spherical or planar in an aberration-free optical system (Fig. 18.2). In ophthalmology, wavefront aberrations are most commonly expressed with Zernike polynomials, as described in Sect. 18.2.2.1.

**Fig. 18.2** Wavefront aberrations. Ideal spherical wavefront (left image), ideal planar wavefront (middle image), aberrated wavefront (right image)

#### **18.2.1 Optical Aberrations of Human Eye**

The eye as an optical system focuses the entering light rays on the retina. The cornea is a transparent structure in front of the eye that helps to focus the incoming light. The image-forming light is further focused by the lens onto the retina, where the optical signals are converted to neural signals, enabling individuals to see the world. Any imperfection in focusing the light on the retina causes the light rays to deviate, and these deviations are referred to as optical or wavefront aberrations. These aberrations lead to blurred images and decreased visual performance.

Wavefront aberrations are of two types: lower-order and higher-order aberrations.

Lower-order aberrations comprise myopia, hyperopia, and astigmatism, which can be corrected with glasses, contact lenses or refractive surgery. These make up about 85% of the aberrations in the eye. The remaining, not visually significant, lower-order aberrations are the first-order aberrations (prism) and the zero-order aberration (piston).

There are numerous higher-order aberrations, of which coma, trefoil, and spherical aberration are of clinical interest in ophthalmology. Coma is a distortion in image formation that occurs when light rays entering the optical system are not parallel to the optic axis. Spherical aberration is an imaging imperfection that occurs when light rays from the edges of a lens or mirror focus at a shorter distance than rays from the center. Higher-order aberrations are more complex than lower-order aberrations and produce vision errors such as difficulty seeing at night, glare, halos, blurring, starburst patterns or double vision. With the latest technological advancements, these aberrations can be measured and diagnosed [8].

#### **18.2.2 Quantitative Expression of Ocular Aberrations**

#### **18.2.2.1 Zernike Polynomials**

The most common method to classify the shapes of aberration maps is to consider each map as the sum of fundamental shapes or basis functions. In ophthalmology, wavefront aberrations are expressed using a series of Zernike polynomials (Table 18.1). The benefit of expressing aberrations in Zernike polynomials is that the polynomials are independent of each other, and each coefficient directly gives the contribution of that term to the wavefront error.

The Zernike polynomials form a complete set of functions orthogonal over the unit circle (Fig. 18.3), parameterized by a dimensionless radial parameter ρ and a dimensionless meridional parameter θ. Each polynomial is designated by a non-negative radial integer index *n* and a signed meridional index *m*, and is the product of three terms: a normalization term, a radial term and a meridional term. It is given by the following equations.

$$Z_n^m\left(\rho,\theta\right) = N_n^m R_n^{|m|}\left(\rho\right)\cos\left(m\theta\right) \quad \text{for } m \ge 0,\ 0 \le \rho \le 1,\ 0 \le \theta \le 2\pi$$

$$Z_n^m\left(\rho,\theta\right) = -N_n^m R_n^{|m|}\left(\rho\right)\sin\left(m\theta\right) \quad \text{for } m < 0,\ 0 \le \rho \le 1,\ 0 \le \theta \le 2\pi$$

For a given *n*, *m* can take only the values −*n*, −*n* + 2, −*n* + 4, …, *n*.

$N_n^m$ is the normalization factor, given by

$$N_n^m = \sqrt{\frac{2\left(n+1\right)}{1+\delta_{m0}}}, \qquad \delta_{m0} = 1 \text{ for } m = 0,\quad \delta_{m0} = 0 \text{ for } m \neq 0$$

$R_n^{|m|}\left(\rho\right)$ is the radial polynomial, given by

$$R_n^{|m|}\left(\rho\right) = \sum_{s=0}^{\left(n-|m|\right)/2} \frac{\left(-1\right)^s\left(n-s\right)!}{s!\left[0.5\left(n+|m|\right)-s\right]!\left[0.5\left(n-|m|\right)-s\right]!}\,\rho^{n-2s}$$
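The three terms above can be combined into a short numerical sketch. The following Python snippet (an illustration, not from the original text; the function names are our own) evaluates $Z_n^m(\rho,\theta)$ directly from the formulas:

```python
from math import factorial, sqrt, cos, sin

def norm_factor(n, m):
    """Normalization term N_n^m; the Kronecker delta is 1 only for m = 0."""
    delta = 1 if m == 0 else 0
    return sqrt(2 * (n + 1) / (1 + delta))

def radial_poly(n, m, rho):
    """Radial term R_n^{|m|}(rho) as the finite sum over s."""
    m = abs(m)
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s)
           * factorial((n + m) // 2 - s)
           * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

def zernike(n, m, rho, theta):
    """Zernike polynomial Z_n^m(rho, theta) on the unit circle."""
    if m >= 0:
        return norm_factor(n, m) * radial_poly(n, m, rho) * cos(m * theta)
    return -norm_factor(n, m) * radial_poly(n, m, rho) * sin(m * theta)
```

For example, defocus ($Z_2^0$) evaluates to $\sqrt{3}\,(2\rho^2 - 1)$, so `zernike(2, 0, 1.0, 0.0)` returns approximately 1.732.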

#### **18.2.2.2 Standard Wavefront Error Description**

The wavefront error of an eye is the optical path length, i.e. the product of the geometric length (physical distance the light travels) and the refractive index of the medium, between a plane wavefront in the eye's entrance pupil and the wavefront of light exiting the eye from a point source on the retina. It is specified as a function of (x, y) or (ρ, θ) coordinates of the entrance pupil. Wavefront errors are measured in an axial direction from the pupil plane towards the wavefront. By convention, the wavefront error is set to zero at the pupil center by subtracting the central value from values at all other pupil locations.

The wavefront is described using the Zernike polynomial functions as shown below.

$$W\left(\rho,\theta\right) = \sum\_{n,m} c\_n^m Z\_n^m \left(\rho,\theta\right).$$

where $c_n^m$ denotes the Zernike amplitudes or coefficients and $Z_n^m$ the polynomials.


**Table 18.1** Zernike polynomials up to fourth order

Adapted by permission from Optical Laboratories Association: [9]

**Fig. 18.3** Ophthalmic coordinate system

#### **18.2.2.3 Root Mean Square Wavefront Error**

Quantitative comparisons between different eyes and conditions are expressed as the root mean square (RMS) wavefront error. The RMS wavefront error for the human eye is computed as the square root of the variance of the wavefront error function. Piston and tilt are usually excluded from the calculation since they correspond to lateral displacements of the image rather than image degradation. The RMS error is defined by the following equation.

$$RMS = \sqrt{\frac{\iint_{pupil}\left[W\left(x,y\right)-\overline{W}\right]^2 dx\,dy}{A}}$$

where A is the area of the pupil and $\overline{W}$ is the mean wavefront optical path difference.

If the wavefront function is expressed in terms of normalized Zernike coefficients, the RMS value is equal to the square root of the sum of the squares of the coefficients with radial indices *n* ≥ 2.

$$RMS = \sqrt{\sum_{n \ge 2,\ \text{all } m}\left(c_n^m\right)^2}$$
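This bookkeeping is easy to sketch in code (an illustrative example with made-up coefficient values): the RMS follows directly from the second-and-higher-order coefficients.

```python
from math import sqrt

def rms_wavefront_error(coefficients):
    """RMS wavefront error from normalized Zernike coefficients.

    `coefficients` maps (n, m) index pairs to coefficient values; piston
    and tilt (n < 2) are excluded, as they displace the image rather
    than degrade it.
    """
    return sqrt(sum(c ** 2 for (n, _m), c in coefficients.items() if n >= 2))

# Hypothetical example: 0.3 um of defocus and 0.4 um of astigmatism
# combine to a 0.5 um RMS error; the tilt term (n = 1) is ignored.
coeffs = {(1, 1): 1.0, (2, 0): 0.3, (2, -2): 0.4}
```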

#### **18.2.2.4 Point Spread Function**

The aberrations of the human eye negatively impact retinal image quality. To characterize the effects of aberrations, Fourier optics has been introduced into ophthalmology. One of these tools is the PSF, which describes the propagation of electromagnetic radiation or other imaging waves from a point source or point object.

In incoherent imaging systems such as fluorescent microscopes, telescopes or optical microscopes, the image formation process is linear and described by linear system theory. The process is usually formulated by a convolution equation.

$$Image = Object \otimes PSF$$

Image: image generated by the optical system, PSF: point spread function of the optical system,

Object: object,

⊗: convolution operator.

Figure 18.4 shows the image formation process of the optical system. The final image created by the system is the convolution of the object which is going to be imaged with the point spread function of the system. Due to the imperfection of the optical system, the image is normally blurred.
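The convolution model can be illustrated numerically. In this sketch (our own toy example using NumPy, not taken from the chapter), two point-like "retinal features" are blurred by a Gaussian kernel standing in for the system PSF:

```python
import numpy as np

# Toy 1-D object: two point sources standing in for retinal features
obj = np.zeros(64)
obj[20] = 1.0
obj[24] = 1.0

# Toy PSF: a normalized Gaussian kernel standing in for the aberrated
# system response (sigma = 2 samples)
x = np.arange(-8, 9)
psf = np.exp(-x**2 / (2 * 2.0**2))
psf /= psf.sum()

# Image = Object (convolved with) PSF: the two points smear together
img = np.convolve(obj, psf, mode="same")
```

Because the PSF is normalized, the total energy is preserved while the peaks drop and broaden, which is exactly the loss of contrast described above.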

The optical properties of the eye can be characterized by the wavefront error function, which can be described by a series of Zernike polynomials. The image of a point object formed by the optical system is the point spread function or impulse response. It is defined as

$$PSF\left(x,y\right) = \frac{1}{\lambda^2 d^2 A_p}\left|FT\left\{p\left(x,y\right)e^{-i\frac{2\pi}{\lambda}W\left(x,y\right)}\right\}\right|^2$$

**Fig. 18.4** Image formation process. The final image generated by the optical system can be described as a mathematical convolution of the object with the PSF of the optical system

FT: Fourier transform operator.

d: distance from the exit pupil to the image plane.

$A_p$: area of the exit pupil.

p(x, y): defines the shape, size and transmission of the exit pupil.

$e^{-i\frac{2\pi}{\lambda}W\left(x,y\right)}$: accounts for phase deviations of the wavefront from a reference sphere.

W(x, y): wavefront aberration function at the exit pupil.

It is also possible to calculate the modulation transfer function (MTF) of the human eye, which is the modulus of the optical transfer function (OTF). The MTF is commonly used to characterize the resolution and performance of an imaging system and is also known as the spatial frequency response. The mathematical formulas are given by the following equations:

$$OTF\left(s_x,s_y\right) = \frac{FT\left\{PSF\right\}}{FT\left\{PSF\right\}\big|_{s_x=0,\,s_y=0}}$$

$$MTF\left(s_x,s_y\right) = \left|OTF\left(s_x,s_y\right)\right|$$

where $(s_x, s_y)$ are spatial frequencies in units of cycles per radian.
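These relations can be sketched numerically: sample a circular pupil, apply a wavefront phase term, and take Fourier transforms. This is an illustrative NumPy computation of our own; the 0.2 waves of defocus is an assumed value, wavelengths are used as the unit of W so λ = 1, and the constant prefactor of the PSF is omitted.

```python
import numpy as np

n = 256
coords = np.linspace(-1, 1, n)
X, Y = np.meshgrid(coords, coords)
rho = np.sqrt(X**2 + Y**2)

pupil = (rho <= 1.0).astype(float)                 # circular exit pupil p(x, y)
# Assumed aberration: 0.2 waves of defocus, Z_2^0 = sqrt(3)(2 rho^2 - 1)
W = 0.2 * np.sqrt(3.0) * (2 * rho**2 - 1) * pupil  # in units of wavelengths

# Generalized pupil function including the wavefront phase term
g = pupil * np.exp(-1j * 2 * np.pi * W)

# PSF ~ |FT of the generalized pupil|^2 (prefactor omitted)
psf = np.abs(np.fft.fftshift(np.fft.fft2(g)))**2

# OTF is the FT of the PSF normalized to its zero-frequency value;
# the MTF is its modulus
otf = np.fft.fft2(np.fft.ifftshift(psf))
mtf = np.abs(otf) / np.abs(otf[0, 0])
```

The ratio of the peak of this aberrated `psf` to the peak obtained with `W = 0` is the Strehl ratio discussed in Sect. 18.2.2.5.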

#### **18.2.2.5 Strehl Ratio**

The Strehl ratio is a measure of the effect of aberrations in reducing the maximum or peak value of the PSF. It is defined as the ratio of the observed peak intensity at the detection plane of a telescope or other imaging system from a point source to the theoretical maximum peak intensity of a perfect imaging system working at the diffraction limit. The exact calculation involves complex mathematics, but a simple empirical expression gives a very close approximation of the Strehl ratio in terms of the RMS wavefront error:

$$Strehl\ ratio \cong e^{-\left(2\pi\sigma\right)^2}$$

where *σ* is the root-mean-square deviation of the wavefront measured in wavelengths.
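The approximation is a single line of code; this sketch (our own illustration) also notes the conventional diffraction-limited criterion:

```python
from math import exp, pi

def strehl_ratio(sigma_waves):
    """Approximate Strehl ratio from the RMS wavefront error sigma,
    expressed in wavelengths (the empirical formula quoted above)."""
    return exp(-(2 * pi * sigma_waves) ** 2)

# A perfect wavefront gives a Strehl ratio of 1; the often-quoted
# lambda/14 RMS criterion corresponds to a Strehl ratio near 0.8.
```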

#### **18.2.3 Assessment of Ocular Aberrations**

Higher-order aberrations can be measured using a wavefront sensor; Hartmann-Shack sensors have been widely used in ophthalmology to measure the monochromatic aberrations of the eye. The efficacy of this technique was first demonstrated by Junzhong Liang during his doctoral work at the University of Heidelberg under the supervision of Prof. Dr. Josef Bille [10]. The aberrometer measures the distortion of a light wave passing through the optics of the eye. These sensors do not measure light scatter, chromatic aberrations or diffraction phenomena, whose effects on vision have to be assessed with other technologies. The principle of the wavefront aberrometer is explained in detail in Chap. 16.

#### **18.3 Experimental Setup**

#### **18.3.1 Confocal Scanning Laser Ophthalmoscope**

For the preliminary study, the cSLO (SPECTRALIS, Heidelberg Engineering GmbH, Heidelberg, Germany) with the high-magnification objective lens was used. cSLO is a non-invasive imaging technique that scans the retina with a laser beam, enabling high-resolution retinal imaging. The schematic of the cSLO is shown in Fig. 18.5. The fluorescent light emitted from the retina (green line in Fig. 18.5) is captured by a detector, and the out-of-focus light (red line in Fig. 18.5) is eliminated by a confocal pinhole in front of the detector. With SPECTRALIS, the depth of focus can be adjusted manually and deeper tissue structures can be visualized. Three-dimensional images can be generated with the focal plane adjustment as well (see Chap. 2 for more details).

The SPECTRALIS is an indispensable instrument in the field of ophthalmology, combining cSLO and high-resolution OCT, and has been widely used to diagnose various retinal diseases. It is an expandable diagnostic imaging platform that covers everything from conventional fundus imaging to ultra-widefield retinal imaging simply by changing the objective lenses. High-contrast fundus images can be acquired with a 30° field of view (FOV) using the standard objective lens (Fig. 18.7, left image). Widefield images of the fundus can be achieved with an additional objective lens allowing a 55° FOV, capturing the macula, the optic nerve head and areas beyond the vessel arcades in a single image (Fig. 18.6, left image). Widefield imaging facilitates comprehensive diagnostics beyond conventional fundus imaging. In addition, the ultra-widefield objective lens can capture an extremely wide FOV of 102° with evenly illuminated, high-contrast images even in the periphery (Fig. 18.6, right image). Thus, with a single imaging platform, images of different FOVs can be acquired using SPECTRALIS simply by changing the objective lenses.

Similarly, Heidelberg Engineering has developed a new lens for high-magnification retinal imaging with 8° and 4° FOV. The high-magnification objective lens is an add-on lens that replaces the standard objective lens to acquire a smaller FOV at higher magnification (Figs. 18.7, 18.8, 18.9, and 18.10). The pixel densities are improved in both X and Y directions compared to the standard objective lens (30° FOV). The digital image readout for the high-resolution mode with the high-magnification objective lens is 1536 × 1536 pixels for the 8° FOV and 768 × 768 pixels for the 4° FOV. Currently, the 4° FOV is possible only with the in-house test device for preliminary studies; if the results are adequate and warrant a future-oriented solution, the firmware will be updated with the 4° FOV in a later release of SPECTRALIS with a high-magnification objective. For laser safety, an additional blocking filter is integrated into the objective mount. This filter strongly attenuates the blue laser (486 nm). The infrared laser (810 nm), indocyanine green laser (786 nm) and green laser (518 nm) can be used for examination with the high-magnification objective lens in the reflectance and angiography modes. The cone photoreceptors can be visualized with the high-magnification objective lens in subjects with few ocular aberrations even without the use of adaptive optics.

**Fig. 18.6** SPECTRALIS widefield (left image) and ultra-widefield (right image) retinal image of the right eye

**Fig. 18.7** SPECTRALIS standard infrared reflectance image (30° FOV—left image) of the right eye. High-magnification retinal images with 8° (red box) and 4° (yellow box) FOV. Zoom-in images of 8° (red dotted line box) and 4° (yellow dotted line box) FOV showing cone photoreceptors

**Fig. 18.8** SPECTRALIS image of the fovea (4-point star) with 8° FOV (left image) showing cone photoreceptors at retinal eccentricities. Zoom-in images showing cone photoreceptors at the retinal eccentricities (red boxes) and no photoreceptors at the fovea (yellow boxes)

**Fig. 18.9** SPECTRALIS with high-magnification objective lens showing nerve fiber bundles (left image) and lamina cribrosa (right image)

#### **18.3.2 Measurement of Ocular Aberrations**

Many commercially available aberrometers can be used to measure the ocular aberrations of an eye to produce a customized phase plate for each individual. For this study, we used the commercially available aberrometer iDesign™ from Abbott Medical Optics, which combines aberrometry and corneal topography measurements (Fig. 18.11). The wavefront sensor component in this instrument is of the Hartmann-Shack type. With the measured ocular aberrations, a customized phase plate for each individual was produced. The ocular aberration measurements were performed at the eye clinic in Heidelberg (Augenpraxisklinik Heidelberg). The pupil was dilated by the physicians for the aberration measurements.

**Fig. 18.10** Green light retinal image with standard (30° FOV—left image) and high-magnification (8° FOV—right image) objective lenses

**Fig. 18.11** Ocular aberrations measurement of right eye from a healthy volunteer with a commercially available aberrometer (iDesign)

#### **18.3.3 Zemax Simulation**

Zemax simulations were carried out to characterize the effect of aberration compensation using phase plates. One example is presented in this section to evaluate the efficacy of a phase plate in correcting the ocular aberrations. The Zernike coefficients from the iDesign measurement (Fig. 18.11, table) were used to simulate the wavefront map in Zemax. Piston, tilt and defocus were excluded from the simulation study, since piston and tilt correspond to lateral displacements of the image rather than image degradation, and defocus can be adjusted manually on the SPECTRALIS. The wavefront map simulated in Zemax is shown in Fig. 18.12 and compared to the aberrometer measurement; the RMS error value and the wavefront map from the Zemax simulation match the aberrometer measurement.

Figure 18.13 shows the corresponding PSF and MTF of the eye before and after aberration compensation. The PSF and MTF clearly show that the aberrations severely impact retinal image quality and that compensating them yields a large improvement.

#### **18.3.4 Phase Plate**

A phase plate is a pre-compensation unit that corrects higher-order aberrations of the human eye for retinal imaging. Diffraction-limited imaging can be achieved with phase plates by compensating the aberrations of the optical system and the human eye, providing a significant improvement in the contrast of retinal images. With improved contrast, more structural and morphological information can be retrieved from the retina. The phase plates can be manufactured using the Zernike coefficients of the measured ocular aberrations (Fig. 18.11, table), as the phase plate is the inverse wavefront of the measured aberrations. This inverse wavefront flattens the aberrated wavefront and thereby improves the contrast of a scanning laser ophthalmoscope. Figure 18.14 shows an example of how the phase plate compensates the aberrations and results in a flat wavefront.
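The inverse-wavefront idea, and the sensitivity of the compensation to plate alignment noted later in this chapter, can be sketched with a toy wavefront map (illustrative NumPy code; the defocus-like map and the 8-sample shift are assumptions of ours, not measured data):

```python
import numpy as np

coords = np.linspace(-1, 1, 128)
X, Y = np.meshgrid(coords, coords)
aberrated = 0.5 * (2 * (X**2 + Y**2) - 1)   # toy defocus-like wavefront map

# The phase plate carries the inverse (sign-flipped) wavefront
plate = -aberrated

def rms(w):
    """RMS of a sampled wavefront map."""
    return float(np.sqrt(np.mean(w**2)))

# Perfectly aligned plate: the aberration cancels exactly
residual_aligned = aberrated + plate

# Laterally misaligned plate (shifted by 8 samples): cancellation degrades
residual_shifted = aberrated + np.roll(plate, 8, axis=1)
```

Even this simple model leaves a nonzero residual RMS once the plate is shifted, which is why positioning the plate at the scan pupil matters.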

#### **18.3.5 Mask Structured Ion Exchange Technique**

**Fig. 18.12** Zemax simulation of wavefront map with higher-order aberrations without defocus and astigmatism (left image). Wavefront map of higher-order aberrations from aberrometer (right image). RMS = 0.48 μm

**Fig. 18.13** PSF and MTF simulation in Zemax. PSF (top-left) and MTF (top-right) of aberrated eye. PSF (bottom-left) and MTF (bottom-right) after compensating the ocular aberrations

**Fig. 18.14** Wavefront compensation with phase plate. Wavefront map of the aberrated eye (left image), wavefront map of the phase plate (middle image) and the resulting wavefront map (right image)

The mask structured silver-sodium ion exchange in glass (MSI) technique is used to produce the phase plates; it is a powerful tool for the realization of high-precision refractive micro-optical components. A planar glass substrate is covered with a titanium layer on both sides. One side is structured by a photolithographic process to obtain well-defined apertures for the ion migration into the glass material, while the other side remains unstructured. The diffusion process takes place in a melt of AgNO3, in which sodium ions in the glass are exchanged for silver ions from the melt (Fig. 18.15, left image). For the field-assisted process (Fig. 18.15, right image), an additional electric field is applied between the silver salt melt (anode) and the bottom of the glass (cathode), which drives a current of silver ions into the glass [11].

#### **18.3.6 Phase Plate Specifications**

The customized phase plates for each individual were manufactured by Smart Microoptical Solutions, Germany. The aberration compensation zone, or active zone, in each plate is 8 mm in diameter, and the phase plates are 2 mm thick (Fig. 18.16, left image). The phase plate can be easily fitted into the SPECTRALIS with a phase plate holder (Fig. 18.16, right image). The length of the extension holder is about 55 mm, and the phase plates were positioned at the scan pupil of the scanning laser to compensate the aberrations. As a result, with the inclusion of phase plates in combination with the high-magnification objective lens, the aberrations can be compensated for high-contrast retinal imaging.

**Fig. 18.15** Mask structured ion exchange technique. A thermal diffusion: exchange of *Na+* by *Ag+* ions (left image), field assisted process: *Ag+* ion current (right image) (adapted by permission from SPIE: [11])

#### **18.3.7 Retinal Imaging with Phase Plates—Experimental Results**

In vivo measurements on healthy volunteers were performed in dilated and undilated eyes, since the pupil size affects image quality. Phase plates are expected to increase image quality for larger pupils. However, with a large pupil the aberrations are higher than with a small pupil, and these aberrations have to be eliminated so that diffraction dominates and the optical quality improves. For pupils smaller than 3 mm, the ocular aberrations of healthy eyes are minimal; retinal imaging of photoreceptors and microvasculature with phase plates might therefore not improve greatly over imaging without phase plates. In this study, the efficacy of the phase plate for different pupil sizes was evaluated. For retinal imaging with a dilated pupil, the subjects' eyes were dilated at the eye clinics in Heidelberg, and no other drugs were administered to prevent accommodation.

With SPECTRALIS, the NIR laser (810 nm) can be used for photoreceptor imaging and the green laser (518 nm) for microvasculature imaging. Different layers of the retina can also be visualized with the high-magnification objective lens by adjusting the focus. In this study, however, we focused on cone photoreceptors to determine the efficacy of phase plates in retinal imaging with NIR.

Images visualizing cone photoreceptors in healthy volunteers were acquired with 8° and 4° FOV. A significant improvement in image quality with phase plates compared to images without phase plates was noticed in both dilated (Fig. 18.17) and undilated eyes (Fig. 18.18). In Fig. 18.17, the top-left image visualizes the retina in a dilated eye with 4° FOV, with cone photoreceptors clearly visible. The contrast improvement with the phase plate was also evident (Fig. 18.17, right image). Likewise, the improvement in contrast and visualization of cone photoreceptors was noticed in the undilated pupil as well (Fig. 18.18). The comparison of retinal image quality without phase plate (Figs. 18.17 and 18.18, solid line boxes) and with phase plate (Figs. 18.17 and 18.18, dotted line boxes) clearly demonstrates the improvement in contrast of the cone photoreceptors with the implementation of the phase plate. This indicates that phase plates will facilitate examining individuals suffering from higher-order aberrations.

**Fig. 18.17** High-magnification retinal images in a dilated eye without (top-left) and with (top-right) phase plate. Comparison of retinal image quality without (solid line boxes) and with (dotted line boxes) phase plate

The cone photoreceptors were resolved even with the largest FOV (8°) in this study; however, with the large FOV, degradation in the corners of the image was noticed, and the foveal cone photoreceptors could not be optically resolved. This could be due to the isoplanatic angle of the eye, which is on the order of 3–4°, and the sampling density might not be sufficient to resolve such small structures. The 4° FOV in our test device was only a digital zoom, so it does not provide more physical information or better resolution, and the retinal structures cannot be better resolved. However, the 4° FOV could be used to better position the focal plane at the retina for photoreceptor imaging and to eliminate the image degradation in the corners.

Widefield photoreceptor imaging can be achieved by montaging. Retinal images can be acquired from nine different locations using the fixation target and then stitched together for a wide FOV. With the stitched images, a FOV of 15° can be achieved with the SPECTRALIS. However, due to the size of the photoreceptors, small details cannot be fully appreciated in the stitched overview (Fig. 18.19), and manual zooming into the images is required.

**Fig. 18.18** High-magnification retinal images of an undilated eye without (top-left) and with (top-right) phase plate. Comparison of retinal image quality without (solid line boxes) and with (dotted line boxes) phase plate

**Fig. 18.19** 8° FOV stitched images in 30° FOV retinal image and zoom-in images of 8° FOV (red and yellow box)

**Fig. 18.20** Retinal image (left image), background image (middle image), and the retinal image after background subtraction (right image)

Currently, the retinal images acquired with the high-magnification objective lens show a bright white spot in the center of the image. This spot results from reflection at the lens itself, and its visibility depends on the pupil size of the imaged eye: the laser beam diameter of the high-magnification objective lens is 6 mm, so for smaller pupils the reflection from the lens was visible in the images, whereas it was not noticed in retinal images from dilated eyes (see Figs. 18.17 and 18.18). These reflections can be eliminated by subtracting a background image from the acquired retinal image (Fig. 18.20).
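The subtraction step can be sketched as follows (a toy NumPy example of our own; frame sizes and intensities are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy retinal frame: low-amplitude structure plus a bright central
# reflection artifact standing in for the lens reflection
structure = rng.uniform(0.0, 0.2, size=(64, 64))
coords = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(coords, coords)
reflection = np.exp(-(X**2 + Y**2) / 0.05)

retinal_image = structure + reflection
background = reflection          # frame capturing only the lens reflection

# Subtracting the background removes the central spot and keeps the
# retinal structure; negative values are clipped to zero
corrected = np.clip(retinal_image - background, 0.0, None)
```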

#### **18.4 Conclusion and Discussion**

The study confirmed that phase plates can compensate the ocular aberrations and improve retinal image quality. Lower-order aberrations such as defocus and astigmatism can be compensated using sphero-cylindrical (astigmatic) lenses, which are included with the high-magnification objective lens. In this study, the phase plates were tested in healthy volunteers, compensating the lower- and higher-order aberrations together. With the help of wavefront technology, the optical aberrations of the human eye can be measured precisely, which aids in producing the phase plates.

Compared to adaptive optics systems, the phase plates are much easier to adapt to existing clinical ophthalmoscopes and are less expensive. Significant improvements in retinal image quality can be achieved by introducing this technology to the cSLO. Improved diagnosis and detection of retinal diseases at a cellular level will be clinically relevant and will add value for patients as well as for research.

Although an improvement in retinal image quality was observed, this study is at a preliminary stage, and further improvements are needed to implement the phase plates in clinical routine. Currently, the customized phase plates have been tested only on healthy volunteers with minimal aberrations; as a next step, the phase plates need to be tested on individuals suffering from higher-order aberrations. In our experimental setup, no wavefront sensor is included, and a commercial aberrometer was used to measure the ocular aberrations for producing the phase plates. It would be a great improvement if a Hartmann-Shack sensor were integrated into the system to measure the aberrations: this would allow the system and ocular aberrations to be measured together, so that no additional device would be needed for ocular aberration measurements. A wavefront sensor in the system could also improve the degree of compensation, because the positioning of the phase plates is crucial for retinal imaging. Proper positioning of the phase plate is a key factor in achieving the optimum degree of compensation, as misalignment can reduce its performance. Therefore, an integrated wavefront sensor could guide the photographers in positioning the patient's eye for effective optical compensation.

Beyond the cSLO, the phase plates could be implemented in other retinal imaging technologies as well. Phase plates combined with OCT could improve the lateral resolution by compensating the aberrations, and they could also be used in two-photon ophthalmoscopy for diffraction-limited imaging [7]. Phase plates could further be used for selective retinal therapy (SRT), since the laser can then be focused precisely on the retina by compensating the aberrations of the human eye. In the future, combining SRT with two-photon ophthalmoscopy and phase plates could advance SRT treatment, because with the two-photon principle the laser can be focused more precisely on the retinal pigment epithelium (RPE) without irradiating other retinal tissue layers. The RPE can also be visualized with two-photon retinal imaging, and these images could serve as reference images for the SRT procedure. In conclusion, the phase plates show significant improvements in retinal image quality with the cSLO and are promising for improved diagnosis and detection of retinal diseases in subjects affected by higher-order optical aberrations.

#### **References**

1. Marcos S, Werner J, Burns S, Merigan W, Artal P, Atchison D, Hampson K, Legras R, Lundstrom L, Yoon G, Carroll J, Choi S, Doble N, Dubis A, Dubra A, Elsner A, Jonnal R, Miller D, Paques M, Smithson H, Young L, Zhang Y, Campbell M, Hunter J, Metha A, Palczewska G, Schallek J, Sincich L. Vision science and adaptive optics, the state of the field. Vis Res. 2017;132:3–33.


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Epilogue**

### Tilman Otto

When Professor Josef Bille shared his concept for a book dedicated to Dr. Gerhard Zinser, everyone who had the privilege of working with him over the years was excited, especially me. It led to an interdisciplinary project in which outstanding ophthalmologists, scientists and colleagues could contribute their expertise and pay tribute to a great person. I am very grateful to the many authors who invested significant effort and dedication to make this happen. It was truly worthwhile: the book provides a clear and understandable overview of the broad field of high-resolution imaging in microscopy and ophthalmology.

I started working with Gerhard at Heidelberg Engineering in 1995, when the Heidelberg Retina Tomograph (HRT) had been on the market for several years. The HRT was based on the Laser Tomographic Scanner (LTS), which Gerhard had developed in the late 1980s at Heidelberg Instruments. The success of Heidelberg Engineering was based not only on the entrepreneurial spirit of the two founders, Gerhard Zinser and Christoph Schoess, but also on two fundamental factors: the focus on the essentials and the associated miniaturization of the camera unit, and the rapid technical development along with the increasing performance of the young personal computer (PC). Only this combination made it possible to offer such a device at an affordable price. As a fully digital system, the HRT represented a quantum leap for imaging of the fundus at a time when color fundus photos were still taken on conventional film. It became the first diagnostic imaging device that was able to objectively quantify clinically relevant factors of the optic nerve head.

This marked the beginning of an era of innovative technical achievements in ophthalmology that continues today. A groundbreaking example of this is certainly optical coherence tomography (OCT), which today plays an outstanding role in routine clinical practice. Who would have thought at the time that the lateral resolution of imaging in the human eye with adaptive optics would be surpassed by the axial imaging resolution of OCT technology?

I consider it a fortunate coincidence and a privilege to have accompanied Gerhard, a unique pioneer in this field, for more than 20 years and to have been involved in the significant advancement of diagnostics in ophthalmology. I know that many of my colleagues feel the same way. Gerhard was a true visionary for whom only the best was good enough. His view was always directed forward, and discussions were always held as equals, with the greatest respect. Over the years, Gerhard carefully put together a very efficient and harmonious team. These colleagues have not only become experts in their fields, but are also highly innovative and exceptionally motivated.

In 2014, Gerhard appointed me Head of the Research and Development department. I am very grateful to him for the trust he placed in me, which allows me to work with a distinguished team to lead Heidelberg Engineering into a successful future with innovative technical solutions.

While this book highlights technologies that have become an integral part of everyday clinical life, it also presents many methods, such as 2-photon imaging, which are not yet commercially available.

This era of rapid technical progress in ophthalmology is far from over, as the need for improved and novel diagnostic imaging is greater than ever before. Such imaging is needed not only to deepen our understanding of physiology and pathophysiology, but also as a prerequisite for improving testing efficacy, allowing for the development and approval of new therapeutic approaches.

For example, the current treatment of age-related macular degeneration (OCT-guided anti-VEGF therapy) only takes place in a late phase of the disease and treats only the symptoms, not the underlying causes. Important and specific indicators of the early form of AMD are the so-called basal laminar and basal linear deposits (BLamD/BLinD). BLamD is thicker and can be seen with OCT under some circumstances; BLinD is thinner and cannot yet be visualized. Visualizing these deposits would be ideal for a corresponding future therapeutic treatment of early age-related macular degeneration.

It is not only the innovative technical development of the devices themselves that will overcome this and other challenges in the future and open up new paths to improve patient care, but also the exciting potential of artificial intelligence in medicine.

I am very curious to see where this journey will take us and very excited to be part of it.

# **Index**

#### **A**

- Abbe's barrier obsolete, 5
- Abbe's diffraction barrier, 15
- Abbe's equation, 5, 11
- ABCA4 mutation, 54
- Aberration-free retinal imaging, 355
- Aberrometer: active mirror, 348, 349; clinical studies, 349; high coma aberration, 352; higher-order RMS values, 349; microchip mirror, 349; patient JS (OD), 348, 350; patient RB (OD), 348, 350; phase plate simulating human eye with high coma, 352; phase plate with 3rd order trefoil aberration, 348, 350; phase plate with 4th order spherical aberration, 348, 350; refractive surgical patient, 352; setup, 351; 2π-phase wrapping control, 348, 349; WaveScan™ tunnel target, 352
- Acanthamoeba disease, 287
- Acousto-optic modulators (AOMs), 321, 365
- Acquired vitelliform lesions, 91
- Active eye tracking (TruTrack™), 71
- Adaptive optical closed-loop control, 339, 340
- Adaptive optical control system, 340
- Adaptive optics: aberration measurement (distorted wavefront, 342; equal level contour maps, human eye, 342; incident plane wave, 342, 344; Shack-Hartmann method, 342; thin beam ray tracing aberrometer, 342; Tscherning aberrometer, 342; WaveScan™ instrument, 342; WaveScan™ measurements, 344); aberration-free or a diffraction-limited system, 378; aberration-free retinal imaging, 355; aberrometer (active mirror, 348, 349; clinical studies, 349; high coma aberration, 352; higher-order RMS values, 349; microchip mirror, 349; patient JS (OD), 348, 350; patient RB (OD), 348, 350; phase plate simulating human eye with high coma, 352; phase plate with 3rd order trefoil aberration, 348, 350; phase plate with 4th order spherical aberration, 348, 350; refractive surgical patient, 352; setup, 351; 2π-phase wrapping control, 348, 349; WaveScan™ tunnel target, 352); active focus control and/or wavefront sensing, 340; AOSLO, 360; closed loop systems, 339, 341; closed-loop adaptive optical control (astronomical applications, 347; history, 347, 348; principle, 346); closed-loop controls, 339; cone classes, 356; core system, 360; cost and complexity, 378; daylight vision conditions, 339, 340; deformable mirror, 378; diffraction limit, 378; experimental setup (cSLO, 384–387; mask structured ion exchange (MSI) technique, 388, 390; ocular aberrations, 386, 387; phase plates specifications, 389–390; retinal imaging with phase plates–experimental results, 390–393; wavefront compensation with phase plate, 388, 389; Zemax simulation, 388); for photoreceptor-targeted psychophysics (*see* Photoreceptor-targeted psychophysics); high resolution retinal imaging devices, 360; higher-order aberrations, 341–342; history, 340, 341; human vision, 353, 354; lower-order aberrations, 342; ophthalmic scanning laser imaging systems, 378; optical image quality (modulation transfer function, 345–346; optical aberration index, 345; performance indices, human eye, 346; point spread function, 346; RMS, 342, 343, 345); twilight vision conditions, 339, 340; wavefront aberrations (aberrated wavefront, 379; human eye, optical aberrations, 379–380; ocular aberrations (*see* Ocular aberrations); planar wavefront, 379; spherical wavefront, 379; typical optical imaging system, 378, 379); wavefront detection technology, 378; wavefront sensor, 378; wavefront-guided laser refractive surgery (CustomVue), 356
- Adaptive optics combined with scanning laser ophthalmoscopy (AOSLO), 360
- Adaptive optics optical coherence tomography (AO-OCT), 60, 81, 82
- Adaptive optics scanning laser ophthalmoscopy (AOSLO), 60
- Advanced glycation end products (AGEs), 216
- Aerial image modulation (AIM) curve, 345
- Age related macular degeneration (AMD), 55, 71–72, 88, 150–156, 164, 165, 213, 220–223, 237, 243
- α-Oxoaldehyde, 227
- Amplitude based flow quantification: coherent technique, 171; complex amplitude and intensity signals, 172–174; speckle decorrelation, 173, 174 (SSADA algorithm, 175; VISTA method, 175, 176); squared magnitude, 171
- Amplitude scan (A-scan), 61
- Anatomic positioning system (APS), 72, 118, 120
- Angle opening distance (AOD), 288
- Anterior axial curvature maps, 291
- Anterior chamber angle (ACA), 288
- Anterior chamber angle measuring tools, 288
- Anterior segment OCT: SD-OCT technology, 285 (anterior chamber angle measuring tools, 288; cornea after refractive surgery, 287; predefined scan patterns, 286; sclera after trabeculoplasty, 287; sclera anatomy, 287; SPECTRALIS anterior segment module, 286); slit-lamp OCT, 285; SS-OCT technology, 285 (anterior chamber angle, 296; anterior segment imaging, 297; cataract evaluation, 292–295; cornea evaluation, 288, 290–292); time-domain OCT devices, 285; Zeiss Visante OCT™, 285
- Anti-missile defense systems, 339
- Aperture, 264
- ArKr-laser, 35
- Arrhenius theory, 239
- Artificial convolutional neuronal networks, 252
- ASIC chip, 347
- Auto-fluorescence (AF) imaging, 51
- Automatic real-time tracking (ART), 113, 114
- Avalanche photodiodes (APD), 39, 218

#### **B**

- Bessel function, 40
- Best spectacle-corrected visual acuity (BSCVA), 356
- Blue-light reflectance imaging, 225
- BMO-based minimum rim width (BMO-MRW), 76
- BMO-MRW analysis, 76
- Bowman membrane, 273
- Bowman's layer, 270
- Branch retinal vein occlusion (BRVO), 147
- Branching vascular network (BVN), 176
- Brownian motion, 174
- Bruch membrane, 221
- Bruch's membrane, 72, 89, 91, 150, 238, 239, 243
- Bruch's membrane opening (BMO), 76, 115
- Bruch's membrane opening–minimum rim width (BMO-MRW), 116
- B-scan, 61
- Bull's eye maculopathy, 54

#### **C**

- Capsulotomy, 310–311
- Central retinal vein occlusion (CRVO), 147
- Central serous chorioretinopathy (CSCR), 92, 94, 95
- Central serous retinopathy (CSR), 237, 243, 245
- Chirped pulse amplification (CPA) technique, 303, 304
- Chopped pattern, 309
- Chorioretinal atrophy, 94
- Choroidal melanomas, 96
- Choroidal neovascularization (CNV), 164
- Choroidal osteoma, 98
- Choroidal-scleral interface (CSI), 72
- Chromatic dispersion compensation, 365–366
- Circumpapillary RNFLT (cpRNFLT), 125
- Clinical AOSLO microperimetry, 370
- Closed loop adaptive optical system, 341
- Closed-loop adaptive optical control: astronomical applications, 347; history, 347, 348; principle, 346
- Coma, 380
- Combination of confocal scanning laser microscopy (CSLM), 201
- Commercial SS-OCT devices, 87
- Complex amplitude, 172–174
- Cone targeted psychophysics, 366–369
- Confocal microscopy, 6
- Confocal scanning laser microscopy (cSLM), 263
- Confocal scanning laser ophthalmoscope (cSLO), 69, 87, 108, 196, 384–387
- Continuous wave (CW) lasers, 14–15
- Conventional light-focusing microscope, 4
- Coordinate-stochastic nanoscopy methods, 18
- Corneal epithelium, 270
- Cramér-Rao bound (CRB), 27
- Cylindrical pattern, 310

#### **D**

- Dark-state molecules, 5
- Deep capillary plexus (DCP), 140
- Deformable mirrors, 363, 378
- Dendritic spines, 9
- Descemet's membrane, 271
- Diabetic Charcot foot deformity, 275
- Diabetic macular edema (DME), 243
- Diabetic peripheral neuropathy (DPN), 275
- Diabetic retinopathy (DR), 93, 146–149, 164, 226–229, 237
- Dichroic mirror (DCSP), 22
- Diffraction barrier, 5
- Diffraction limit, 378
- Digital Geiger mode, 39
- Diode-pumped solid-state lasers, 303
- Diopter power map, 330
- Doppler OCT, 165
- Drusen, 164
- Dual-color isotropic nanoscopy, 25
- Dye-based angiography, 136
- Dynamic intensity minimum (DyMIN) approach, 21, 25
- Dynamic light scattering optical coherence tomography (DLS-OCT), 172

#### **E**

- Elastic scattering, 265
- Electrical signalling, 254
- Electron microscopy, 3
- Electroretinography (ERG), 182
- End tips of the photoreceptors (ETPR), 81
- Endogenous fluorophores, 200
- Env proteins, 10
- Epiretinal membranes, 101
- Exogenous fluorophores, 198
- Extra capsular cataract extraction, 302

#### **F**

- FD-OCT system, 68
- Femtosecond-laser-assisted cataract surgery (FLACS): applications, 305–306; blindness, 301; chirped pulse amplification, 303, 304; clinical experience (capsulotomy, 314; optical axis, 312, 313; pupil center and apex of lens, 314; swept-source OCT system, 312, 313; VICTUS® femtosecond laser platform, 312); components, 303; dispersive mechanism, 303; elements, 303; extra capsular cataract extraction, 302; group velocity dispersion, 304; history of, 302, 303; nuclear, cortical, and posterior sub-capsular cataract, 302; optical coherence tomography, 306, 307; people's visual disability, 302; phacoemulsification, 302; prevalent ocular disease, 301; solid-state lasers, 303; treatment (capsulotomy, 310–311; capsulotomy incision, 309; corneal incision, 309, 310; correction of astigmatism, 309; engagement, 308; lens fragmentation incision, 309; lens fragmentation pattern, 311; planning parameters, 308; safety, 312; visualization and customization, 309)
- Femtosecond laser excited fluorescence, 324–325
- Femtosecond laser generated polar molecule, 327
- Femtosecond laser-induced refractive index change, 320, 321
- Flavin-adenine-dinucleotide (FAD), 215
- Flavin-mononucleotide (FMN), 215
- Fluorescein angiography (FA), 37, 48, 136
- Fluorescence lifetime imaging (FLIM), 206
- Fluorescence lifetime imaging ophthalmoscopy (FLIO): AMD, 214, 215; aromatic amino acids tyrosine, 216; bilirubin emit fluorescence, 215; clinical applications (AMD, 221–223; diabetic retinopathy, 226–229; healthy eye, 219; MacTel, 225, 226; MP, 220, 221; retinal dystrophies, 223–225); extracellular matrix, 215; FAD, 215; lipofuscin, 214; living human retina, 214; MP, 215; natural endogenous retinal fluorophores, 214; ocular fundus autofluorescence, 214; phenylalanine, 216; phosphoroscope, 213; protein-bound NADH, 215; protoporphyrin IX, 216; skin pigments melanin, 215; spectralis platform (angiography and autofluorescence, 216; consecutive pulses, 218; FLIMX, 219; FLIO laser pulses, 216; FLIO super pixel, 218; fluorescence light, 217; fluorescence light path, 218; intrinsic fluorescence, 217; maximum laser power, 218; multi-exponential decay mode, 219; retinal fluorophore, lifetime measurement, 217; sensitive GaAsP photocathode, 218; single pulse, 218; SLO scanning system, 216; SPC image, 219; Spectralis FA-mode, 218; Spectralis FLIO, 216, 218; Spectralis FLIO system, 216; Spectralis OCT, 216; standard APD detectors, 218; XY pixel position, 218); TCSPC, 214; tryptophan, 216; two-photon excitation microscopy, 214
- Fluorescence lifetime ophthalmoscopy (FLIO), 213
- Fluorescence microscopy, 6
- Fluorescence photon, 4
- Fourier domain OCT (FD-OCT), 62, 165, 182
- Fourier transformation, 62
- 4Pi microscope, 5
- 4Pi-based isoSTED, 20
- Fovea-BMO center (FoBMOC), 118
- Foveal avascular zone (FAZ), 144
- Foveal spatial summation, 368
- Fraunhofer diffraction, 40, 41
- Frequency domain mode-locked lasers (FDML), 82
- Full width at half maximum (FWHM), 202
- Full-field (FF) OCT, 183
- Full-field swept-source OCT (FF-SS-OCT), 169, 183: setup, 185; technical limitations, 191
- Full-thickness macular holes, 98
- Fundus autofluorescence (FAF), 213, 221–222
- Fundus fluorescence angiography (FFA), 251

#### **G**

- Ganglion cell layer (GCL), 76, 77
- Garway-Heath sectors, 76
- Gaussian beam, 41
- Gaussian beam profile, 67
- Gaussian-shaped power spectrum, 64
- Geiger mode, 39
- Genetically encoded calcium indicators (GECIs), 196
- Geographic atrophy (GA), 89, 92, 153
- Glaucoma, 76
- Glaucoma module premium edition (GMPE), 114
- Glaucoma probability score (GPS), 109, 111
- Glyoxal, 227
- GMPE posterior pole horizontal (PPoleH) scan, 122
- Ground-based telescopes, 360
- Ground state depletion (GSD) microscopy, 12
- Group velocity dispersion (GVD), 186, 304

#### **H**

- Heidelberg noise reduction, 286
- Heidelberg retina angiograph 2 (HRA2), 69
- Heidelberg retina angiography systems (HRA), 37
- Heidelberg retina tomograph (HRT), 37, 42, 108, 263, 278
- High magnification objective (HMO), 41
- Higher-order aberrations, 339, 341–342, 380
- Hollow core fiber (HCF), 204
- HORIBA XploRA PLUS Raman Microscope, 322
- Human eye: equal level contour maps, 342, 344; higher-order aberrations, 378; higher-order RMS values, 349; optical aberrations, 355, 356, 379–380; optical system, 339; performance indices, 346; phase plate simulation, with high coma, 352; refractive errors, 339
- Human immunodeficiency virus (HIV), 8
- Human photoreceptor cells: components, 181; cross-sectional OCT image, 183; electroretinography, 182; holographic OCT (data evaluation, 186; FD-OCT, 182; full phase stability, 183; full-field (FF) OCT, 183; full-field swept-source (FF-SS) OCT, 183, 185, 191; microsaccades, 183; phase-stable imaging, 183; retina, 184; retina drifts, 183; tremor, 183); intrinsic optical signals, 182 (molecular origin, 188, 190, 191; optical path length, 188, 189; phase evaluation, 188; retinal imaging and response, 187; rods and cones, 188, 190; technical limitations, 191); non-optical imaging, 182; objective methods, 181; optical imaging, 182; state-of-the-art clinical method, 181; subjective methods, 181
- Human retina, 361
- Hydroimidazolon, 227
- Hydrophilic intraocular lens, 323
- Hydrophilicity-based Δn change, 321

#### **I**

- Image formation process, 382, 383
- Impulse response, 382
- In vivo confocal scanning laser microscopy: cSLM based methods, 263; multiphoton microscopy, 281; non-invasive imaging technique, 263; non-ophthalmological applications, 276–278; OCT-guided *in vivo* confocal laser scanning microscopy, 280, 281; ophthalmological applications (animal studies, 275; assessing peripheral neuropathies, 275; biomarker for disease staging, 275; clinical applications, 272; diabetic Charcot foot deformity, 275; diabetic peripheral neuropathy, 275; fungal keratitis, 272; keratocytes, 273, 274; light transmission microscopic techniques, 271, 272; subbasal nerve plexus, 273); principle of, 264–266; RCM, 267–270; slit lamp microscopy, 280; SNP, 263; subbasal nerve plexus mosaicking, 279
- Indocyanine green angiography (ICGA), 48, 49, 136, 150
- Inexpensive lasers, 14
- Infrared light (IR), 195
- Inner limiting membrane (ILM), 98, 251
- Inner segment/outer segment junction (IS/OS), 186, 187
- Intensity-based DLS-OCT (iDLS-OCT), 173
- Interferometric approach, 360
- Intermediate capillary plexus (ICP), 140
- Internal limiting membrane (ILM), 68
- Intraocular lens power adjustment, 331–333
- Intraocular pressure (IOP), 107
- Intrastromal corneal ring, 292
- Intrinsic optical signals (IOS), 182: molecular origin, 188, 190, 191; optical path length, 188, 189; phase evaluation, 188; retinal imaging and response, 187; rods and cones, 188, 190; technical limitations, 191
- In-vivo lens shaping proof of concept: adjustment of sphere, 330, 331; diopter power map, 330; monofocal IOL to toric IOL conversion, 331, 332; monofocal to multifocal IOL conversion, 331, 332; original modulation map, 330; repeatability, 331; shaping algorithm, 329
- isoSTED microscope, 22
- Iterative algorithmic process, 364

#### **J**

- Johns Hopkins system, 302
- Jones matrix, 80

#### **K**

- Karhunen-Loève wave expansion, 340
- Keratocytes, 273, 274
- Ki-67 marker, 243

#### **L**

- Laser-induced fluorescence (LIF) microscopy, 322
- Laser in situ keratomileusis (LASIK) surgery, 303
- Laser scanning tomography (SLT), 35
- Laser tomographic scanner (LTS), 43: contour line, 44; follow-up and progression analysis, 47; glaucoma diagnostics, 48; HRTII/HRT3 data acquisition work flow, 43, 44; HRTII/HRT3 data processing, 44, 46; Moorfields regression analysis, 44–48; reference plane, 44; stereometric parameters, 44
- Leber miliary aneurysms, 147
- Lens fragmentation pattern, 309, 311
- Lens opacity classification system, 302
- LenSx femtosecond laser system, 310
- Light microscope, 3
- Limiting membrane (ILM), 72
- Linear fluorescence imaging (LFI), 201, 203
- Line-field SS-OCT system, 83
- Lipofuscin, 51, 52, 214
- Live-cell imaging, 9
- Longitudinal chromatic aberration (LCA), 366
- Lower-order aberrations, 342, 379
- LS-RESOLFT concept, 23
- Luminescence, 201

#### **M**

- Macular edema, 93
- Macular pigment (MP), 215, 220
- Macular telangiectasia, 147, 150, 151
- Macular telangiectasia type 2 (MacTel), 220, 225, 226
- Mask structured ion exchange (MSI) technique, 388, 390
- Matrix metalloproteinases (MMPs), 243
- Maximum permissible exposure (MPE), 185
- Mean projection, 139
- MERILAS SRT laser, 253
- Methylglyoxal, 227
- Microbubble formation (MBF), 240, 245, 246
- Microchip mirror, 349
- Micro-lens array, 342
- Mie scattering, 270
- Mie theory, 270
- MINFIELD STED microscopy, 20, 21
- Modulation transfer function (MTF), 321, 345–346, 383
- Monte Carlo simulation, 270
- Moorfields regression analysis (MRA), 44–48, 109
- Motion artifacts, 145, 146
- M-scan, 249
- MultiColor imaging, 75
- Multiphoton microscopy, 281
- Multiple off-state transitions (MOST), 23
- Myopia, 96

#### **N**

- Neodymium-doped glass (Nd:glass), 303
- Neodymium-doped yttrium aluminum garnet (Nd:YAG), 303
- Neovascular AMD, 92, 223
- Nerve fiber layer vascular plexus (NFLVP), 140
- Neuropathy deficit score (NDS), 273
- Neuropathy symptoms score (NSS), 273
- Nicotinamide-adenine-dinucleotide (NADH), 215
- Non-proliferative diabetic retinopathy (NPDR), 228
- Non-signaling state, 5
- Nuclear pore complex architecture, 9

#### **O**

- Objective methods, 181
- OCT angiography (OCTA), 77, 216
- OCT elastography (OCE), 79
- OCT-based velocimetry: AMD, 161; amplitude-based methods, 162; blood flow measurement (amplitude based flow quantification (*see* Amplitude based flow quantification); blood flow quantification, 162); diabetic retinopathy, 161; disease biomarker, 161; glaucoma, 161; measurement uncertainty, 161; neovascularization, 162; OCT beam, 162; phase-based methods, 162; quantitative OCTA, 162; retinal blood flow measurements (AMD, 164, 165; blood pressure and tissue oxygenation, 163; diastole and systole, 163; DR, 164; glaucoma, 164; healthy eyes, 164; inter-device reproducibility, 163–164; quasi-binary signal reconstruction task, 163; scanning laser Doppler flowmetry, 162; volumetric flow rate, 163)
- OCT-guided *in vivo* confocal laser scanning microscopy, 280, 281
- Ocular aberrations: assessment, 383; measurement, 386, 387; point spread function, 382–383; root mean square, 382; Strehl ratio, 383; wavefront error, 380; Zernike polynomials, 380
- Ocular hypertension, 107, 339
- Ocular hypertension treatment study (OHTS), 112
- Ocular motion, 182
- Ophthalmic coordinate system, 382
- Ophthalmic diagnostic imaging: diagnostic modalities, 107; glaucomatous damage, 107; HRT (clinical development, 109, 111; clinical validation, 112; stereometric parameters, 108, 110; surrogate endpoints and progression, 112, 113); intraocular pressure measurements, 107; neuroretinal rim thinning, 108; ONH size and ocular magnification, 119–121; ophthalmoscopic examination, 107; RNFL atrophy, 108; serial stereoscopic photographs, 107; SPECTRALIS SD-OCT (age, 121; axial length, 122; BMO, 115, 116; BMO-MRW, 116; BMO-MRW asymmetry, 122; clinical assessment, 114, 115; dual-beam tracking system, 114; FoBMOC axis, 118; glaucomatous progression, detection of, 124–126; ONH-RC scan, 117; posterior pole, 122, 123, 125; 6-sector analysis, 114; spherical equivalent (SE), 122; tilted disc phenomenon, 122, 124); visual field analysis, 107; visual field testing, 107
- Ophthalmic scanning laser imaging systems, 378
- Optic nerve head (ONH), 43
- Optical aberration index (OAI), 345
- Optical coherence tomography (OCT), 69, 135, 223, 244: contrast mechanisms and new technologies (adaptive optics, 81, 82; high speed OCT, 82, 83; OCTA images, 77; OCTE, 79; PS-OCT, 79–81; retinal blood flow, quantitative measurement of, 77, 78; vis-OCT, 78, 79); diabetic retinopathy and glaucoma, 60; fast scanning rates, 59; in ophthalmic applications, 306, 307; interferometric measurement method, 59; interferometry technique, 87; macula diseases, 60; non-contact imaging technique, 59; principle of (back-reflected waves, 60; broadband source, 60; coherent waves superimpose, 61; FD-OCT, 63, 65; fiber-based implementation, 61; Fourier amplitude, 65; Fourier transform, 65; interference fringe bursts, 61; lateral and axial resolution, 65–67; low-coherence interferometry, 60; pulsed laser source, 61; sensitivity and roll-off, 68; signal averaging and speckle, 69; single interferograms, 63; spectral interferogram, 62; SS-OCT, 62; TD-OCT, 61, 63, 65; time-domain approach, 64); quick signal processing, 59; retina (acquisition protocol, 99, 100; acquisition technique, 99; AMD, 88–92; CSCR, 94; cuticular drusen, 89; diabetic retinopathy, 88, 93; hereditary dystrophies, 88; inherited retinal diseases, 94–96; intermediate and posterior uveitis, 96; interpretation of, 100–102; intraocular tumors, 96; macular edema, 93; pathologic myopia, 94; retinal vascular disease, 88, 93; vitreo-retinal interface, 98, 99); retinal diagnostics, interpretation of, 99; SPECTRALIS device (acquisition speed and sensitivity, 70; acquisition window, 70; APS, 72; ART mean, 71; BMO-MRW analysis, 76; EDI mode, 72; eye motion, 70; FUP, 71; GCL, 76, 77; IR confocal imaging, 69; multi-layer segmentation, 72, 74; nerve fiber layer thickness analysis, 75; RPE, 72; scan density, 70; speckle reduction, 71); time-domain detection, 87
- Optical coherence tomography angiography (OCTA): adjusted segmentation boundaries, 135; algorithm, 135; clinical application of (AMD, 150–156; DR, 146–149; macular telangiectasia, 147, 150, 151; retinal vein occlusion, 147, 149, 150); clinical application of DR, 148; dye-based angiography, 136; dynamic phenomena, 136; fluorescence angiography, 135, 136; ICGA, 136; image artifacts and countermeasures (lateral and axial resolution, 146; motion artifacts, 145, 146; projection artifacts, 144; segmentation artifacts, 144, 145); image construction, 135; metrics, 136; standard structural OCT, 135; technical foundation (data visualization, 137–139; multiple adjacent B-scans, 136; projection method, 139, 140; quantification of, 142–144; retinal vascular network, 140–142; signal processing and image construction, 137); speckle pattern, 136
- Optical image quality: modulation transfer function, 345–346; optical aberration index, 345; performance indices, 346; point spread function, 346; RMS, 342, 343, 345
- Optical point spread function (PSF), 146
- Optical transfer function (OTF), 383
- Original modulation map, 330
- Otorhinolaryngology, 277
- Oxford Clinical Cataract Classification and Grading System, 302
- Oxygen saturation, 78

#### **P**

- Pachymetry map, 290
- Parabolae, 171
- Parkinson's disease, 275
- Pathologic myopia, 94
- PECTRALIS instrument, 60
- Perimetry, 362
- Peripheral capillary non-perfusion, 50
- Perivascular sheathing, 50
- Phacoemulsification, 302
- Phacofragmentation, 309
- Phase-based methods: actual flow velocity, 165; axial motion, 165; circumpapillary scan, 167, 168; digital filtering, 169, 170; Doppler frequency, 165; Doppler frequency bandwidth, 170, 171; en face plane Doppler OCT, 168; flow velocities, 166; Fourier transformation, 165; gray phase noise probability density function, 166; mean axial flow velocity, 166; multi-beam methods, 168, 169; positioning error, 166; probability density function, 167; retinal imaging, application to, 167; shot noise, 166; single scattering particle, 165
- Phase wrapping, 321
- Photo-activated localization microscopy (PALM), 17
- Photochemical process, 333
- Photocoagulation and photodynamic therapy (PDT), 94
- Photo-induced hydrolysis, 322
- Photomultiplier tubes (PMT), 39
- Photopigments, 182
- Photoreceptor-targeted psychophysics: cell-resolved imaging, 369; clinical AOSLO microperimetry, 370; cone mediated vision, 362; hematoxylin and eosin stain, 361, 362; human retina, 359; image forming process, 359; *in vivo* AOSLO imaging, 361, 362; macaque retinal flatmount, 361; optical coherence tomography, 359; retina's cellular composition, 360; scanning laser ophthalmoscope, 359; visual function testing (chromatic dispersion compensation, 365–366; cone targeted psychophysics, 366–369; image motion compensation, 364, 365; monochromatic aberration correction, 363, 364; stimulus light modulation, 364, 365)
- Phototransduction, 362
- Photo-transduction cycle, 182
- Point spread function (PSF), 265–266, 346, 363, 382–383
- Polarization sensitive OCT (PS-OCT), 79–81
- Polarizing beam-splitter (PBS), 21, 22
- Poly-2-hydroxyethylmethacrylate (PHEMA) polymer, 328
- Polymethylmethacrylate (PMMA) lens, 302
- Polypoidal choroidal vasculopathy (PCV), 92, 175, 176
- Positioning error, 166
- Posterior pole asymmetry analysis (PPAA), 122, 125
- Posterior slab boundary surface, 139
- Principal component analysis (PCA), 174
- Projection artifacts, 144
- PSD95-HaloTag, 24
- Pseudoxanthoma elasticum (PXE), 96
- Pyrralin, 227

#### **R**

- Rayleigh equation, 241
- Rayleigh scattering, 270
- Reactive oxygen species (ROS), 227
- Real-time eye tracking system (TruTrack™), 113
- Reference calibration factor (RCF), 53
- Reflectance confocal microscopy (RCM), 60
- Refractive index shaping (RIS) technology: application, 334; chemical basis (blinking, 324; excitation/emission spectra, 327; femtosecond laser generated polar molecule, 327; fluorescent light, 324; hydrophilic intraocular lens, 323; hydrophilic material, Raman spectra of, 327, 328; hydrophobic RIS lenses, 324–326; photo-induced hydrolysis, 322, 323; spatially distributed fluorophores, 324; spectral band assignments, 329); diffractive multifocal IOL to monofocal IOL conversion, 321; dioptric power, 334; femtosecond laser-induced refractive index change, 320, 321; hydrophilicity-based Δn change, 321; implanted premium IOLs, 319; intraocular lens power adjustment, 331–333; in-vivo lens shaping proof of concept (adjustment of sphere, 330, 331; diopter power map, 330; monofocal IOL to toric IOL conversion, 331, 332; monofocal to multifocal IOL conversion, 331, 332; original modulation map, 330; repeatability, 331; shaping algorithm, 329); microscope study (IOL materials, 322; LIF microscopy, 322; Raman microscope, 322; STED microscopes, 322); phase wrapping, 321; photochemical process, 333; photo-induced hydrolysis, 334; postoperative lens customization, 334; power adjustment, 334; special protective spectacles, 334
- RESOLFT nanoscopy, 15
- Reticular pseudodrusen, 89
- Retina, cellular structure, 362
- Retinal autofluorescence imaging, 219
- Retinal ganglion cells (RGCs), 114, 195
- Retinal light based therapy, 237
- Retinal nerve fiber layer (RNFL), 72, 75, 109, 110
- Retinal nerve fiber layer thickness (RNFLT), 124
- Retinal photopigment, 182
- Retinal pigment epithelium (RPE), 65, 68, 81, 89, 164, 238
- Retinal signaling, 196
- Retinal therapy, 237, 238
- Retinal vascular disease, 93
- Retinal vein occlusion, 94, 147, 149, 150
- Retinitis pigmentosa, 54, 96, 224
- Reversible saturable/switchable optical linear (fluorescence) transitions (RESOLFT) microscopy, 20, 23
- Reversibly switchable fluorescent protein (RSFP), 24
- Root mean square (RMS) error, 342, 343, 345, 382
- Rostock cornea module (RCM), 267–271
- RPE dysfunction, 89
- RPE-Bruch's membrane, 89

#### **S**

- Scanning laser ophthalmoscope (SLO), 59, 341: AF imaging systems (origin and spectral characteristics, 51, 52; qAF methodology, 54; quantitative measurements, 52–54; standardized approach, 54); core components (beam splitter, 38, 39; detectors, 39; imaging optics, 39; laser source, 37, 38; scan unit, 37, 38); frame grabber boards, 35; high resolution image, 41–42; history, 35; ICGA, 48–50; LTS (contour line, 44; follow-up examinations, 47; glaucoma diagnostics, 48; HRTII/HRT3 data acquisition work flow, 43, 44; HRTII/HRT3 data processing, 44, 46; Moorfields regression analysis, 44–48; progression analysis, 47; reference plane, 44; stereometric parameters, 44); modern confocal, 36, 37; resolution of (beam waist, 41; confocal aperture, 41; Fraunhofer diffraction, 40, 41; limitations and numerical aperture, 39, 40); widefield angiography, 49
- Scanning laser polarimetry (SLP), 79
- Scanning-slit confocal microscope (SSCM), 265
- SD-based line-field OCT systems, 83
- SD-OCT devices, 87
- Segmentation artifacts, 144, 145
- Selective retina therapy (SRT): Arrhenius theory, 239; Bruch's membrane, 239, 243; bubble clusters, 241; cellular rejuvenation, 239; CSR, 245; cytokine, 242; direct light transmittance, 244; disadvantage, 245; DME treatments, 244; dosimetry and dosing control (broad spectral bandwidth, 248, 249; clinical applications, 246; light reflection, 247, 248; MBF, 245, 246; optoacoustics, 247; small spectral bandwidth interferometry, 248); fluorescein angiogram, 244; histology and electron microscopy, 242; Ki-67 marker, 243; laser beam, 240; MBF, 241; MBF threshold, 242; microbubble occurrence, 240; module integration (electrical signalling, 254; Heidelberg SPECTRALIS platform, 253; investigation systems, 254; MERILAS SRT laser, 253; Q-switched laser, 253; SPECTRALIS Centaurus system, 253; SPECTRALIS Hydra, 253; SPECTRALIS platform, 253; SRT-OCT setup, 255); OCT (bubble build-up, 251; FEA scan, 251; fringe-washout, 249; intensity decorrelation, 249–250; microbubble threshold, 251–253; M-scans of *ex vivo* treatments, 250; pre-clinical studies, 251; TF analysis, 250); Q-modulated frequency-doubled Nd:YLF-laser, 243; Q-modulated Nd:YLF laser, 239; q-switched laser pulses, 240; radiant exposure, 242; Rayleigh equation, 241; repetitive laser pulses, 239; retinal therapy, 237, 238; RPE, 238–240, 243; selective RPE damage, 245; therapeutic outcome, 244
- Semiconductor optical amplifier (SOA), 82
- Semiconductor-based lasers, 303
- Shack-Hartmann method, 342
- Shack-Hartmann sensors, 339
- Shack-Hartmann wavefront sensor, 341, 342, 347, 360
- Shot noise, 166
- Signal-to-noise ratio (SNR), 113
- Silicon photomultipliers (SiPM), 39
- Simultaneous widefield fluorescein, 49
- Single doughnut, 18
- Single photon absorption (SPA), 201
- Single photon avalanche photodiodes (SAPD), 39
- Sinonasal inverted papilloma, 277
- 6-sector Garway-Heath analysis, 123
- Slit lamp microscopy, 280
- Slit-Lamp OCT (SL-OCT), 285
- SLO scanning system, 216
- SLO-based approach, 79
- Solid-state laser, 303
- Sorsby fundus dystrophy, 94, 97
- Spatial frequency response, 383
- Speckle, 69
- Speckle decorrelation, 173, 174
- Spectral domain OCT (SD-OCT), 59, 62, 108
- Spectral OCT-interferograms, 64
- Spectral-domain OCT (SD-OCT), 108: anterior chamber angle measuring tools, 288; cornea after refractive surgery, 287; predefined scan patterns, 286; sclera after trabeculoplasty, 287; sclera anatomy, 287; SPECTRALIS anterior segment module, 286
- Spectral-domain OCT (SD-OCT) technology, 285
investigation systems, 254 MERILAS SRT laser, 253 Q-switched laser, 253 SPECTRALIS Centaurus system, 253 SPECTRALIS Hydra, 253 SPECTRALIS platform, 253 SRT-OCT setup, 255 OCT bubble build-up, 251 FEA scan, 251 fringe-washout, 249 intensity decorrelation, 249–250 microbubble threshold, 251–253 M-scans of *ex vivo* treatments, 250 pre-clinical studies, 251 TF analysis, 250 Q-modulated frequency-doubled Nd:YLF-laser, 243 Q-modulated Nd:YLF laser, 239 q-switched laser pulses, 240 radiant exposure, 242 Rayleigh equation, 241 repetitive laser pulses, 239 retinal therapy, 237, 238 RPE, 238–240, 243 selective RPE damage, 245 therapeutic outcome, 244 Semiconductor optical amplifier (SOA), 82 Semiconductor-based lasers, 303 Shack-Hartmann method, 342 Shack-Hartmann sensors, 339 Shack-Hartmann wavefront sensor, 341, 342, 347, 360 Shot noise, 166 Signal-to-noise ratio (SNR), 113 Silicon photomultiplier (SiPM), 39 Silicon photomultipliers (SiPM), 39 Simultaneous widefield fluorescein, 49 Single doughnut, 18 Single photon absorption (SPA), 201 Single photon avalanche photodiodes (SAPD), 39 Sinonasal inverted papilloma, 277 6-sector Garway-Heath analysis, 123 Slit lamp microscopy, 280 Slit-Lamp OCT (SL-OCT), 285 SLO scanning system, 216 SLO-based approach, 79 Solid-state laser, 303 Sorsby fundus dystrophy, 94, 97 Spatial frequency response, 383 Speckle, 69 Speckle decorrelation, 173, 174 Spectral domain OCT (SD-OCT), 59, 62, 108 Spectral OCT-interferograms, 64 Spectral-domain OCT (SD-OCT), 108 anterior chamber angle measuring tools, 288 cornea after refractive surgery, 287 predefined scan patterns, 286 sclera after trabeculoplasty, 287 sclera anatomy, 287 SPECTRALIS anterior segment module, 286 Spectral-domain OCT (SD-OCT) technology, 285

- SPECTRALIS anterior segment module (ASM), 286
- SPECTRALIS Centaurus system, 253, 254
- Spectralis FLIO system, 216
- SPECTRALIS HRA, 69
- SPECTRALIS HRA+OCT, 69
- SPECTRALIS OCT, 69
- Spectralis scan unit, 204
- Spectralis SLO path, 52
- Spectralis® device, 87
- Spectrometer based OCT (SD-OCT), 62
- Spherical aberration, 380
- Split-spectrum amplitude decorrelation angiography (SSADA) algorithm, 175
- Standard wavefront error description, 380
- Staphylomas, 94
- State-of-the-art clinical method, 181
- STED microscopy, 7, 13
- STED nanoscopy, 10
- STED/RESOLFT microscopy, 23
- STED-beam photons, 12
- STED-like approaches, 17
- Stereometric parameters, 108–110
- Stimulated emission depletion (STED) microscopy, 322
  - beam of light, 6
  - cis-trans isomerization, 14
  - CW lasers, 15
  - dark molecules, 12
  - dark-state molecules, 7
  - dendritic spines, 9
  - eight-fold symmetry, 8
  - fluorophore's chemical environment, 12
  - focal spot of light, 15
  - inexpensive lasers, 14
  - live-cell imaging, 9
  - long-lived states, 14
  - MINFLUX, 29
  - MINFLUX concept, 26
  - molecular transition, 15
  - molecule's position, 26
  - narrower rings, 8
  - nitrogen vacancies, 12
  - Nobel Prize, 20
  - on/off game, 12
  - "on/off" transition, 20
  - PALM concept, 26
  - PALM parallelization, 19, 26
  - PALM principle, 26
  - parallelization, 16
  - pattern of light, 18
  - photon-molecule interaction, 11
  - physical condition for, 7
  - red-shifted beam transferring, 7
  - red-shifted photons, 7
  - schematic setup, 21
  - separation by states, 14
  - single donut, 23
  - sparse fluorophore distributions, 21
  - state lifetimes, 12
  - STED-beam photons, 12
  - STED-like approaches, 17
  - stimulated emission, 6
  - subdiffraction resolution, 8
  - subdiffraction-resolution imaging, 8
  - super-resolution methods, 20
  - "widefield" arrangement, 18, 19
- Strehl ratio, 383
- Subbasal nerve plexus (SNP), 263
- Subbasal nerve plexus mosaicking, 279
- Subjective vision testing, 181
- Sum projection, 139
- Superficial vascular plexus (SVP), 140
- Superluminescent diode (SLD), 62, 70
- Super-resolution microscopy, 24
- Surgically induced astigmatism (SIA), 293
- Swept-source OCT (SS-OCT) technology, 62, 63, 285
  - anterior chamber angle, 296
  - anterior segment imaging, 297
  - cataract evaluation, 292–295
  - cornea evaluation, 288, 290–292
- Swept-source OCT system, 312, 313

#### **T**

- Tandem scanning confocal microscope (TSCM), 265
- Temporal coherence, 60
- Thin beam ray tracing aberrometer, 342
- Three-beam illumination method, 168
- Three-dimensional (3-D) sectioning, 195
- Time correlated single photon counting (TCSPC), 213, 218
- Time-domain OCT devices, 285
- Time-domain OCT technology (TD-OCT), 59, 61, 108
- Time-resolved OCT, 249
- Topographic change analysis (TCA), 109
- Toric intraocular lens (IOL) calculation, 293
- TPE imaging systems, 196
- Transverse chromatic aberration (TCA), 366
- Trefoil, 380
- TruTrack active eye tracking, 286
- TruTrack eye-tracking, 118
- Tscherning aberrometer, 342
- Tscherning ray tracing, 339
- 2π-phase wrapping control, 348, 349
- 2.0 D refractive index shaping lens, 330
- 2D scanning system, 37
- Two-photon absorption (TPA), 201
- Two-photon excitation (TPE) fluorescence imaging
  - cell types, 197
  - confocal reflectance and two-photon images, 204
  - confocal reflectance image, 206
  - fluorescence lifetime maps, 206
  - future application, 209
  - image retinal neurons, 198
  - imaging retinal neurons, 196–198
  - *in vivo*, 199, 200
  - *in vivo* confocal reflectance, 207
  - *in vivo* CSLO images, 199
  - IR light, 195
  - LED flashes, 200
  - linear SPA *vs.* nonlinear TPA imaging, 203, 204
  - longitudinal *in vivo* CSLO images, 198
  - luminescence, 201
  - optical resolution, 202
  - photoreceptor spectral sensitivities, 199
  - pulse measurement, 206
  - retinal cells, 200
  - retinal signaling, 196
  - RGCs, 207
  - simultaneous confocal reflectance, 205
  - SPA, 201
  - Spectralis scan unit, 204
  - Thy1-GCaMP3 mouse, 208
  - TPA, 201
  - TPA probability and dependencies, 201, 202
- Two-photon excitation fluorescence imaging (TPEFI), 201
- Typical optical imaging system, 378

#### **U**

- Ultra widefield fluorescein angiography, 49

#### **V**

- Variable interscan time analysis (VISTA) method, 175, 176
- VICTUS laser system, 307
- VICTUS® Femtosecond Laser Platform, 312
- Visible light OCT (vis-OCT), 78
- Vision cycle, 182
- Visual function testing
  - chromatic dispersion compensation, 365–366
  - cone targeted psychophysics, 366–369
  - image motion compensation, 364, 365
  - monochromatic aberration correction, 363, 364
  - stimulus light modulation, 364, 365
- Visual function tests, 362
- Visual impairment, 301
- Vitreomacular adhesion, 98

#### **W**

- Wavefront aberrations
  - aberrated wavefront, 379
  - human eye, optical aberrations, 379–380
  - ocular aberrations assessment, 383
    - point spread function, 382–383
    - root mean square, 382
    - Strehl ratio, 383
    - wavefront error, 380
    - Zernike polynomials, 380
  - planar wavefront, 379
  - spherical wavefront, 379
  - typical optical imaging system, 378, 379
- Wavefront detection technology, 378
- Wavefront distortions, 339
- Wavefront error, 380
- Wavefront-guided laser refractive surgery (CustomVue), 341, 356
- Wavefront map, 342
- Wavefront sensor, 378
- Wavefront technology, 339, 341, 347
- WaveScan™ instrument, 342
- WaveScan™ measurements, 344
- Widefield angiography, 49, 50
- Widefield fluorescein angiography, 49
- Widefield OCT imaging, 73, 75

#### **Z**

- Zeiss Visante OCT™, 285
- Zemax simulation, 388
- Zernike polynomial expansion, 342
- Zernike polynomial functions, 380
- Zernike polynomials, 340, 342, 343, 380
- z-scan, 44